2026-01-01 00:00:06.948953 | Job console starting
2026-01-01 00:00:06.970281 | Updating git repos
2026-01-01 00:00:07.108415 | Cloning repos into workspace
2026-01-01 00:00:07.442216 | Restoring repo states
2026-01-01 00:00:07.464337 | Merging changes
2026-01-01 00:00:07.464358 | Checking out repos
2026-01-01 00:00:07.832644 | Preparing playbooks
2026-01-01 00:00:09.003720 | Running Ansible setup
2026-01-01 00:00:17.653366 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-01 00:00:21.731952 |
2026-01-01 00:00:21.732187 | PLAY [Base pre]
2026-01-01 00:00:21.821454 |
2026-01-01 00:00:21.822153 | TASK [Setup log path fact]
2026-01-01 00:00:21.877529 | orchestrator | ok
2026-01-01 00:00:21.923501 |
2026-01-01 00:00:21.923686 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-01 00:00:22.030788 | orchestrator | ok
2026-01-01 00:00:22.138865 |
2026-01-01 00:00:22.139035 | TASK [emit-job-header : Print job information]
2026-01-01 00:00:22.264723 | # Job Information
2026-01-01 00:00:22.264919 | Ansible Version: 2.16.14
2026-01-01 00:00:22.264955 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-01 00:00:22.264989 | Pipeline: periodic-midnight
2026-01-01 00:00:22.265012 | Executor: 521e9411259a
2026-01-01 00:00:22.265051 | Triggered by: https://github.com/osism/testbed
2026-01-01 00:00:22.265073 | Event ID: d1d26fc04bfb4173814cf17786ddfb96
2026-01-01 00:00:22.286654 |
2026-01-01 00:00:22.286810 | LOOP [emit-job-header : Print node information]
2026-01-01 00:00:22.844211 | orchestrator | ok:
2026-01-01 00:00:22.844500 | orchestrator | # Node Information
2026-01-01 00:00:22.844539 | orchestrator | Inventory Hostname: orchestrator
2026-01-01 00:00:22.844564 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-01 00:00:22.844586 | orchestrator | Username: zuul-testbed03
2026-01-01 00:00:22.844606 | orchestrator | Distro: Debian 12.12
2026-01-01 00:00:22.844630 | orchestrator | Provider: static-testbed
2026-01-01 00:00:22.844651 | orchestrator | Region:
2026-01-01 00:00:22.844672 | orchestrator | Label: testbed-orchestrator
2026-01-01 00:00:22.844692 | orchestrator | Product Name: OpenStack Nova
2026-01-01 00:00:22.844711 | orchestrator | Interface IP: 81.163.193.140
2026-01-01 00:00:22.882244 |
2026-01-01 00:00:22.882400 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-01 00:00:25.246464 | orchestrator -> localhost | changed
2026-01-01 00:00:25.255202 |
2026-01-01 00:00:25.255345 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-01 00:00:31.243409 | orchestrator -> localhost | changed
2026-01-01 00:00:31.276503 |
2026-01-01 00:00:31.276654 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-01 00:00:32.579122 | orchestrator -> localhost | ok
2026-01-01 00:00:32.586907 |
2026-01-01 00:00:32.587094 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-01 00:00:32.649009 | orchestrator | ok
2026-01-01 00:00:32.743321 | orchestrator | included: /var/lib/zuul/builds/1c6aefb8f75d46b4aa7685e460a319d2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-01 00:00:32.782923 |
2026-01-01 00:00:32.783118 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-01 00:00:41.107887 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-01 00:00:41.108184 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/1c6aefb8f75d46b4aa7685e460a319d2/work/1c6aefb8f75d46b4aa7685e460a319d2_id_rsa
2026-01-01 00:00:41.108227 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/1c6aefb8f75d46b4aa7685e460a319d2/work/1c6aefb8f75d46b4aa7685e460a319d2_id_rsa.pub
2026-01-01 00:00:41.108255 | orchestrator -> localhost | The key fingerprint is:
2026-01-01 00:00:41.108280 | orchestrator -> localhost | SHA256:iyzXPgUS4mFpMV82nlWR5F9JKasNeLDOYxh8K32eyPY zuul-build-sshkey
2026-01-01 00:00:41.108303 | orchestrator -> localhost | The key's randomart image is:
2026-01-01 00:00:41.108340 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-01 00:00:41.108363 | orchestrator -> localhost | | oo + .o+o ..|
2026-01-01 00:00:41.108386 | orchestrator -> localhost | | *o.+ = ......|
2026-01-01 00:00:41.108407 | orchestrator -> localhost | | + oo.o + . o..|
2026-01-01 00:00:41.108429 | orchestrator -> localhost | | . .o.+ o o . |
2026-01-01 00:00:41.108459 | orchestrator -> localhost | | .S.o + . |
2026-01-01 00:00:41.108491 | orchestrator -> localhost | | . = O.o . |
2026-01-01 00:00:41.108513 | orchestrator -> localhost | | . + *.= . |
2026-01-01 00:00:41.108534 | orchestrator -> localhost | | o ..+ o |
2026-01-01 00:00:41.108554 | orchestrator -> localhost | | o..E |
2026-01-01 00:00:41.108575 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-01 00:00:41.108629 | orchestrator -> localhost | ok: Runtime: 0:00:05.826109
2026-01-01 00:00:41.116826 |
2026-01-01 00:00:41.116957 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-01 00:00:41.244573 | orchestrator | ok
2026-01-01 00:00:41.380874 | orchestrator | included: /var/lib/zuul/builds/1c6aefb8f75d46b4aa7685e460a319d2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-01 00:00:41.463416 |
2026-01-01 00:00:41.463601 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-01 00:00:41.561744 | orchestrator | skipping: Conditional result was False
2026-01-01 00:00:41.582741 |
2026-01-01 00:00:41.582930 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-01 00:00:43.620667 | orchestrator | changed
2026-01-01 00:00:43.660321 |
2026-01-01 00:00:43.660478 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-01 00:00:44.092777 | orchestrator | ok
2026-01-01 00:00:44.116432 |
2026-01-01 00:00:44.116590 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-01 00:00:45.096194 | orchestrator | ok
2026-01-01 00:00:45.136679 |
2026-01-01 00:00:45.136905 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-01 00:00:45.826629 | orchestrator | ok
2026-01-01 00:00:45.845939 |
2026-01-01 00:00:45.846108 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-01 00:00:45.927678 | orchestrator | skipping: Conditional result was False
2026-01-01 00:00:45.935843 |
2026-01-01 00:00:45.935981 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-01 00:00:48.001677 | orchestrator -> localhost | changed
2026-01-01 00:00:48.037071 |
2026-01-01 00:00:48.058937 | TASK [add-build-sshkey : Add back temp key]
2026-01-01 00:00:49.533631 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/1c6aefb8f75d46b4aa7685e460a319d2/work/1c6aefb8f75d46b4aa7685e460a319d2_id_rsa (zuul-build-sshkey)
2026-01-01 00:00:49.533890 | orchestrator -> localhost | ok: Runtime: 0:00:00.045760
2026-01-01 00:00:49.544020 |
2026-01-01 00:00:49.544200 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-01 00:00:50.173675 | orchestrator | ok
2026-01-01 00:00:50.195819 |
2026-01-01 00:00:50.195977 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-01 00:00:50.255406 | orchestrator | skipping: Conditional result was False
2026-01-01 00:00:50.611403 |
2026-01-01 00:00:50.611550 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-01 00:00:51.682591 | orchestrator | ok
2026-01-01 00:00:51.745440 |
2026-01-01 00:00:51.745609 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-01 00:00:51.933791 | orchestrator | ok
2026-01-01 00:00:51.965777 |
2026-01-01 00:00:51.965936 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-01 00:00:54.180640 | orchestrator -> localhost | ok
2026-01-01 00:00:54.189587 |
2026-01-01 00:00:54.189721 | TASK [validate-host : Collect information about the host]
2026-01-01 00:00:57.477055 | orchestrator | ok
2026-01-01 00:00:57.565012 |
2026-01-01 00:00:57.565180 | TASK [validate-host : Sanitize hostname]
2026-01-01 00:00:57.948564 | orchestrator | ok
2026-01-01 00:00:57.979688 |
2026-01-01 00:00:57.979848 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-01 00:01:01.474185 | orchestrator -> localhost | changed
2026-01-01 00:01:01.481530 |
2026-01-01 00:01:01.481672 | TASK [validate-host : Collect information about zuul worker]
2026-01-01 00:01:02.565817 | orchestrator | ok
2026-01-01 00:01:02.577573 |
2026-01-01 00:01:02.577681 | TASK [validate-host : Write out all zuul information for each host]
2026-01-01 00:01:04.843250 | orchestrator -> localhost | changed
2026-01-01 00:01:04.865536 |
2026-01-01 00:01:04.865682 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-01 00:01:05.278373 | orchestrator | ok
2026-01-01 00:01:05.285206 |
2026-01-01 00:01:05.285344 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-01 00:02:25.060190 | orchestrator | changed:
2026-01-01 00:02:25.060452 | orchestrator | .d..t...... src/
2026-01-01 00:02:25.060524 | orchestrator | .d..t...... src/github.com/
2026-01-01 00:02:25.060550 | orchestrator | .d..t...... src/github.com/osism/
2026-01-01 00:02:25.060571 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-01 00:02:25.060592 | orchestrator | RedHat.yml
2026-01-01 00:02:25.075867 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-01 00:02:25.075885 | orchestrator | RedHat.yml
2026-01-01 00:02:25.075938 | orchestrator | = 2.2.0"...
2026-01-01 00:02:35.841928 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-01 00:02:35.860562 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-01-01 00:02:36.015828 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-01 00:02:36.500495 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-01 00:02:36.570614 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-01 00:02:37.105605 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-01 00:02:37.173594 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-01 00:02:37.970853 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-01 00:02:37.970921 | orchestrator |
2026-01-01 00:02:37.970928 | orchestrator | Providers are signed by their developers.
2026-01-01 00:02:37.970934 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-01 00:02:37.970954 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-01 00:02:37.971217 | orchestrator |
2026-01-01 00:02:37.971228 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-01 00:02:37.971241 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-01 00:02:37.971246 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-01 00:02:37.971261 | orchestrator | you run "tofu init" in the future.
2026-01-01 00:02:37.971896 | orchestrator |
2026-01-01 00:02:37.971952 | orchestrator | OpenTofu has been successfully initialized!
2026-01-01 00:02:37.971979 | orchestrator |
2026-01-01 00:02:37.971985 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-01 00:02:37.971990 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-01 00:02:37.971994 | orchestrator | should now work.
2026-01-01 00:02:37.971998 | orchestrator |
2026-01-01 00:02:37.972003 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-01 00:02:37.972007 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-01 00:02:37.972019 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-01 00:02:38.251718 | orchestrator | Created and switched to workspace "ci"!
2026-01-01 00:02:38.251783 | orchestrator |
2026-01-01 00:02:38.251789 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-01 00:02:38.251795 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-01 00:02:38.251802 | orchestrator | for this configuration.
2026-01-01 00:02:38.368157 | orchestrator | ci.auto.tfvars
2026-01-01 00:02:38.372830 | orchestrator | default_custom.tf
2026-01-01 00:02:39.599271 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-01 00:02:40.155560 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-01 00:02:40.490119 | orchestrator |
2026-01-01 00:02:40.490201 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-01 00:02:40.490208 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-01 00:02:40.490213 | orchestrator | + create
2026-01-01 00:02:40.490218 | orchestrator | <= read (data resources)
2026-01-01 00:02:40.490223 | orchestrator |
2026-01-01 00:02:40.490227 | orchestrator | OpenTofu will perform the following actions:
2026-01-01 00:02:40.490231 | orchestrator |
2026-01-01 00:02:40.490236 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-01 00:02:40.490240 | orchestrator | # (config refers to values not yet known)
2026-01-01 00:02:40.490244 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-01 00:02:40.490248 | orchestrator | + checksum = (known after apply)
2026-01-01 00:02:40.490252 | orchestrator | + created_at = (known after apply)
2026-01-01 00:02:40.490256 | orchestrator | + file = (known after apply)
2026-01-01 00:02:40.490259 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490281 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.490286 | orchestrator | + min_disk_gb = (known after apply)
2026-01-01 00:02:40.490290 | orchestrator | + min_ram_mb = (known after apply)
2026-01-01 00:02:40.490294 | orchestrator | + most_recent = true
2026-01-01 00:02:40.490298 | orchestrator | + name = (known after apply)
2026-01-01 00:02:40.490302 | orchestrator | + protected = (known after apply)
2026-01-01 00:02:40.490306 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.490315 | orchestrator | + schema = (known after apply)
2026-01-01 00:02:40.490319 | orchestrator | + size_bytes = (known after apply)
2026-01-01 00:02:40.490322 | orchestrator | + tags = (known after apply)
2026-01-01 00:02:40.490326 | orchestrator | + updated_at = (known after apply)
2026-01-01 00:02:40.490330 | orchestrator | }
2026-01-01 00:02:40.490334 | orchestrator |
2026-01-01 00:02:40.490338 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-01 00:02:40.490342 | orchestrator | # (config refers to values not yet known)
2026-01-01 00:02:40.490346 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-01 00:02:40.490350 | orchestrator | + checksum = (known after apply)
2026-01-01 00:02:40.490354 | orchestrator | + created_at = (known after apply)
2026-01-01 00:02:40.490357 | orchestrator | + file = (known after apply)
2026-01-01 00:02:40.490361 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490365 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.490369 | orchestrator | + min_disk_gb = (known after apply)
2026-01-01 00:02:40.490372 | orchestrator | + min_ram_mb = (known after apply)
2026-01-01 00:02:40.490376 | orchestrator | + most_recent = true
2026-01-01 00:02:40.490380 | orchestrator | + name = (known after apply)
2026-01-01 00:02:40.490384 | orchestrator | + protected = (known after apply)
2026-01-01 00:02:40.490387 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.490391 | orchestrator | + schema = (known after apply)
2026-01-01 00:02:40.490395 | orchestrator | + size_bytes = (known after apply)
2026-01-01 00:02:40.490399 | orchestrator | + tags = (known after apply)
2026-01-01 00:02:40.490402 | orchestrator | + updated_at = (known after apply)
2026-01-01 00:02:40.490406 | orchestrator | }
2026-01-01 00:02:40.490410 | orchestrator |
2026-01-01 00:02:40.490414 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-01 00:02:40.490418 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-01 00:02:40.490432 | orchestrator | + content = (known after apply)
2026-01-01 00:02:40.490436 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-01 00:02:40.490440 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-01 00:02:40.490444 | orchestrator | + content_md5 = (known after apply)
2026-01-01 00:02:40.490448 | orchestrator | + content_sha1 = (known after apply)
2026-01-01 00:02:40.490451 | orchestrator | + content_sha256 = (known after apply)
2026-01-01 00:02:40.490455 | orchestrator | + content_sha512 = (known after apply)
2026-01-01 00:02:40.490459 | orchestrator | + directory_permission = "0777"
2026-01-01 00:02:40.490463 | orchestrator | + file_permission = "0644"
2026-01-01 00:02:40.490467 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-01 00:02:40.490470 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490474 | orchestrator | }
2026-01-01 00:02:40.490478 | orchestrator |
2026-01-01 00:02:40.490482 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-01 00:02:40.490486 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-01 00:02:40.490489 | orchestrator | + content = (known after apply)
2026-01-01 00:02:40.490493 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-01 00:02:40.490497 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-01 00:02:40.490501 | orchestrator | + content_md5 = (known after apply)
2026-01-01 00:02:40.490505 | orchestrator | + content_sha1 = (known after apply)
2026-01-01 00:02:40.490508 | orchestrator | + content_sha256 = (known after apply)
2026-01-01 00:02:40.490516 | orchestrator | + content_sha512 = (known after apply)
2026-01-01 00:02:40.490520 | orchestrator | + directory_permission = "0777"
2026-01-01 00:02:40.490524 | orchestrator | + file_permission = "0644"
2026-01-01 00:02:40.490531 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-01 00:02:40.490535 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490539 | orchestrator | }
2026-01-01 00:02:40.490543 | orchestrator |
2026-01-01 00:02:40.490547 | orchestrator | # local_file.inventory will be created
2026-01-01 00:02:40.490550 | orchestrator | + resource "local_file" "inventory" {
2026-01-01 00:02:40.490554 | orchestrator | + content = (known after apply)
2026-01-01 00:02:40.490558 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-01 00:02:40.490562 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-01 00:02:40.490565 | orchestrator | + content_md5 = (known after apply)
2026-01-01 00:02:40.490569 | orchestrator | + content_sha1 = (known after apply)
2026-01-01 00:02:40.490573 | orchestrator | + content_sha256 = (known after apply)
2026-01-01 00:02:40.490577 | orchestrator | + content_sha512 = (known after apply)
2026-01-01 00:02:40.490581 | orchestrator | + directory_permission = "0777"
2026-01-01 00:02:40.490585 | orchestrator | + file_permission = "0644"
2026-01-01 00:02:40.490588 | orchestrator | + filename = "inventory.ci"
2026-01-01 00:02:40.490592 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490596 | orchestrator | }
2026-01-01 00:02:40.490600 | orchestrator |
2026-01-01 00:02:40.490604 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-01 00:02:40.490607 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-01 00:02:40.490611 | orchestrator | + content = (sensitive value)
2026-01-01 00:02:40.490615 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-01 00:02:40.490619 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-01 00:02:40.490623 | orchestrator | + content_md5 = (known after apply)
2026-01-01 00:02:40.490626 | orchestrator | + content_sha1 = (known after apply)
2026-01-01 00:02:40.490630 | orchestrator | + content_sha256 = (known after apply)
2026-01-01 00:02:40.490648 | orchestrator | + content_sha512 = (known after apply)
2026-01-01 00:02:40.490652 | orchestrator | + directory_permission = "0700"
2026-01-01 00:02:40.490656 | orchestrator | + file_permission = "0600"
2026-01-01 00:02:40.490659 | orchestrator | + filename = ".id_rsa.ci"
2026-01-01 00:02:40.490663 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490667 | orchestrator | }
2026-01-01 00:02:40.490671 | orchestrator |
2026-01-01 00:02:40.490675 | orchestrator | # null_resource.node_semaphore will be created
2026-01-01 00:02:40.490678 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-01 00:02:40.490682 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490686 | orchestrator | }
2026-01-01 00:02:40.490690 | orchestrator |
2026-01-01 00:02:40.490694 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-01 00:02:40.490698 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-01 00:02:40.490701 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:40.490705 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:40.490709 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490713 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:40.490717 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.490721 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-01 00:02:40.490724 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.490728 | orchestrator | + size = 80
2026-01-01 00:02:40.490732 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:40.490736 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:40.490739 | orchestrator | }
2026-01-01 00:02:40.490743 | orchestrator |
2026-01-01 00:02:40.490747 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-01 00:02:40.490751 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:40.490755 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:40.490758 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:40.490762 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490769 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:40.490773 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.490777 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-01 00:02:40.490781 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.490784 | orchestrator | + size = 80
2026-01-01 00:02:40.490788 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:40.490792 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:40.490796 | orchestrator | }
2026-01-01 00:02:40.490800 | orchestrator |
2026-01-01 00:02:40.490803 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-01 00:02:40.490807 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:40.490811 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:40.490815 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:40.490819 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490822 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:40.490826 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.490830 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-01 00:02:40.490834 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.490838 | orchestrator | + size = 80
2026-01-01 00:02:40.490841 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:40.490845 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:40.490849 | orchestrator | }
2026-01-01 00:02:40.490853 | orchestrator |
2026-01-01 00:02:40.490856 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-01 00:02:40.490860 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:40.490864 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:40.490868 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:40.490871 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490875 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:40.490879 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.490883 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-01 00:02:40.490886 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.490890 | orchestrator | + size = 80
2026-01-01 00:02:40.490897 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:40.490901 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:40.490904 | orchestrator | }
2026-01-01 00:02:40.490908 | orchestrator |
2026-01-01 00:02:40.490912 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-01 00:02:40.490916 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:40.490919 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:40.490923 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:40.490927 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490931 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:40.490935 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.490938 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-01 00:02:40.490942 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.490946 | orchestrator | + size = 80
2026-01-01 00:02:40.490950 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:40.490953 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:40.490957 | orchestrator | }
2026-01-01 00:02:40.490961 | orchestrator |
2026-01-01 00:02:40.490965 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-01 00:02:40.490969 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:40.490972 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:40.490976 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:40.490980 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.490988 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:40.490992 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.490995 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-01 00:02:40.490999 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.491003 | orchestrator | + size = 80
2026-01-01 00:02:40.491007 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:40.491011 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:40.491014 | orchestrator | }
2026-01-01 00:02:40.491369 | orchestrator |
2026-01-01 00:02:40.491390 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-01 00:02:40.491395 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-01 00:02:40.491398 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:40.491402 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:40.491406 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.491410 | orchestrator | + image_id = (known after apply)
2026-01-01 00:02:40.491414 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.491418 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-01 00:02:40.491455 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.491460 | orchestrator | + size = 80
2026-01-01 00:02:40.491464 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:40.491467 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:40.491471 | orchestrator | }
2026-01-01 00:02:40.491587 | orchestrator |
2026-01-01 00:02:40.491600 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-01 00:02:40.491605 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:40.491608 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:40.491612 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:40.491616 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.491619 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.491624 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-01 00:02:40.491628 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.491632 | orchestrator | + size = 20
2026-01-01 00:02:40.491635 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:40.491640 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:40.491644 | orchestrator | }
2026-01-01 00:02:40.491717 | orchestrator |
2026-01-01 00:02:40.491729 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-01 00:02:40.491733 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:40.491737 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:40.491741 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:40.491744 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.491748 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.491752 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-01 00:02:40.491756 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.491760 | orchestrator | + size = 20
2026-01-01 00:02:40.491764 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:40.491768 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:40.491771 | orchestrator | }
2026-01-01 00:02:40.491848 | orchestrator |
2026-01-01 00:02:40.491859 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-01 00:02:40.491864 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:40.491868 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:40.491872 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:40.491875 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.491879 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.491883 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-01 00:02:40.491887 | orchestrator | + region = (known after apply)
2026-01-01 00:02:40.491897 | orchestrator | + size = 20
2026-01-01 00:02:40.491901 | orchestrator | + volume_retype_policy = "never"
2026-01-01 00:02:40.491904 | orchestrator | + volume_type = "ssd"
2026-01-01 00:02:40.491908 | orchestrator | }
2026-01-01 00:02:40.491977 | orchestrator |
2026-01-01 00:02:40.491988 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-01 00:02:40.491992 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-01 00:02:40.491996 | orchestrator | + attachment = (known after apply)
2026-01-01 00:02:40.492000 | orchestrator | + availability_zone = "nova"
2026-01-01 00:02:40.492004 | orchestrator | + id = (known after apply)
2026-01-01 00:02:40.492011 | orchestrator | + metadata = (known after apply)
2026-01-01 00:02:40.492015 | orchestrator | + name = "testbed-volume-3-node-3" 2026-01-01 00:02:40.492019 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.492023 | orchestrator | + size = 20 2026-01-01 00:02:40.492027 | orchestrator | + volume_retype_policy = "never" 2026-01-01 00:02:40.492030 | orchestrator | + volume_type = "ssd" 2026-01-01 00:02:40.492034 | orchestrator | } 2026-01-01 00:02:40.492103 | orchestrator | 2026-01-01 00:02:40.492115 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created 2026-01-01 00:02:40.492119 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-01 00:02:40.492123 | orchestrator | + attachment = (known after apply) 2026-01-01 00:02:40.492126 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:40.492130 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.492134 | orchestrator | + metadata = (known after apply) 2026-01-01 00:02:40.492138 | orchestrator | + name = "testbed-volume-4-node-4" 2026-01-01 00:02:40.492141 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.492145 | orchestrator | + size = 20 2026-01-01 00:02:40.492149 | orchestrator | + volume_retype_policy = "never" 2026-01-01 00:02:40.492153 | orchestrator | + volume_type = "ssd" 2026-01-01 00:02:40.492156 | orchestrator | } 2026-01-01 00:02:40.492232 | orchestrator | 2026-01-01 00:02:40.492243 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created 2026-01-01 00:02:40.492247 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-01 00:02:40.492251 | orchestrator | + attachment = (known after apply) 2026-01-01 00:02:40.492255 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:40.492259 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.492262 | orchestrator | + metadata = (known after apply) 2026-01-01 00:02:40.492266 | orchestrator | + name = "testbed-volume-5-node-5" 
2026-01-01 00:02:40.492270 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.492274 | orchestrator | + size = 20 2026-01-01 00:02:40.492277 | orchestrator | + volume_retype_policy = "never" 2026-01-01 00:02:40.492281 | orchestrator | + volume_type = "ssd" 2026-01-01 00:02:40.492285 | orchestrator | } 2026-01-01 00:02:40.492351 | orchestrator | 2026-01-01 00:02:40.492362 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created 2026-01-01 00:02:40.492366 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-01 00:02:40.492370 | orchestrator | + attachment = (known after apply) 2026-01-01 00:02:40.492374 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:40.492378 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.492382 | orchestrator | + metadata = (known after apply) 2026-01-01 00:02:40.492385 | orchestrator | + name = "testbed-volume-6-node-3" 2026-01-01 00:02:40.492389 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.492393 | orchestrator | + size = 20 2026-01-01 00:02:40.492396 | orchestrator | + volume_retype_policy = "never" 2026-01-01 00:02:40.492400 | orchestrator | + volume_type = "ssd" 2026-01-01 00:02:40.492404 | orchestrator | } 2026-01-01 00:02:40.492490 | orchestrator | 2026-01-01 00:02:40.492501 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created 2026-01-01 00:02:40.492506 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-01 00:02:40.492513 | orchestrator | + attachment = (known after apply) 2026-01-01 00:02:40.492517 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:40.492521 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.492525 | orchestrator | + metadata = (known after apply) 2026-01-01 00:02:40.492528 | orchestrator | + name = "testbed-volume-7-node-4" 2026-01-01 00:02:40.492532 | orchestrator | + region = (known after apply) 
2026-01-01 00:02:40.492536 | orchestrator | + size = 20 2026-01-01 00:02:40.492540 | orchestrator | + volume_retype_policy = "never" 2026-01-01 00:02:40.492543 | orchestrator | + volume_type = "ssd" 2026-01-01 00:02:40.492547 | orchestrator | } 2026-01-01 00:02:40.492623 | orchestrator | 2026-01-01 00:02:40.492635 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-01 00:02:40.492639 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-01 00:02:40.492643 | orchestrator | + attachment = (known after apply) 2026-01-01 00:02:40.492647 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:40.492650 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.492654 | orchestrator | + metadata = (known after apply) 2026-01-01 00:02:40.492658 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-01 00:02:40.492662 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.492665 | orchestrator | + size = 20 2026-01-01 00:02:40.492669 | orchestrator | + volume_retype_policy = "never" 2026-01-01 00:02:40.492673 | orchestrator | + volume_type = "ssd" 2026-01-01 00:02:40.492678 | orchestrator | } 2026-01-01 00:02:40.492940 | orchestrator | 2026-01-01 00:02:40.492956 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-01 00:02:40.492961 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-01 00:02:40.492965 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:40.492969 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:40.492972 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:40.492976 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:40.492980 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:40.492984 | orchestrator | + config_drive = true 2026-01-01 00:02:40.492991 | orchestrator | + created = (known after apply) 
2026-01-01 00:02:40.492995 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:40.492998 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-01 00:02:40.493002 | orchestrator | + force_delete = false 2026-01-01 00:02:40.493006 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:40.493010 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.493013 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:40.493017 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:40.493021 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:40.493025 | orchestrator | + name = "testbed-manager" 2026-01-01 00:02:40.493028 | orchestrator | + power_state = "active" 2026-01-01 00:02:40.493032 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.493036 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:40.493040 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:40.493044 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:40.493047 | orchestrator | + user_data = (sensitive value) 2026-01-01 00:02:40.493051 | orchestrator | 2026-01-01 00:02:40.493055 | orchestrator | + block_device { 2026-01-01 00:02:40.493059 | orchestrator | + boot_index = 0 2026-01-01 00:02:40.493063 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:40.493066 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:40.493070 | orchestrator | + multiattach = false 2026-01-01 00:02:40.493074 | orchestrator | + source_type = "volume" 2026-01-01 00:02:40.493077 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.493087 | orchestrator | } 2026-01-01 00:02:40.493091 | orchestrator | 2026-01-01 00:02:40.493095 | orchestrator | + network { 2026-01-01 00:02:40.493099 | orchestrator | + access_network = false 2026-01-01 00:02:40.493102 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:40.493106 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-01-01 00:02:40.493110 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:40.493114 | orchestrator | + name = (known after apply) 2026-01-01 00:02:40.493118 | orchestrator | + port = (known after apply) 2026-01-01 00:02:40.493121 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.493125 | orchestrator | } 2026-01-01 00:02:40.493129 | orchestrator | } 2026-01-01 00:02:40.493319 | orchestrator | 2026-01-01 00:02:40.493331 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-01 00:02:40.493335 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:40.493339 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:40.493343 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:40.493347 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:40.493351 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:40.493354 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:40.493358 | orchestrator | + config_drive = true 2026-01-01 00:02:40.493362 | orchestrator | + created = (known after apply) 2026-01-01 00:02:40.493366 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:40.493369 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:40.493373 | orchestrator | + force_delete = false 2026-01-01 00:02:40.493377 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:40.493381 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.493385 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:40.493388 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:40.493392 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:40.493396 | orchestrator | + name = "testbed-node-0" 2026-01-01 00:02:40.493400 | orchestrator | + power_state = "active" 2026-01-01 00:02:40.493403 | orchestrator | + region 
= (known after apply) 2026-01-01 00:02:40.493407 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:40.493411 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:40.493414 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:40.493418 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:40.493454 | orchestrator | 2026-01-01 00:02:40.493458 | orchestrator | + block_device { 2026-01-01 00:02:40.493462 | orchestrator | + boot_index = 0 2026-01-01 00:02:40.493466 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:40.493470 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:40.493474 | orchestrator | + multiattach = false 2026-01-01 00:02:40.493477 | orchestrator | + source_type = "volume" 2026-01-01 00:02:40.493481 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.493485 | orchestrator | } 2026-01-01 00:02:40.493489 | orchestrator | 2026-01-01 00:02:40.493492 | orchestrator | + network { 2026-01-01 00:02:40.493496 | orchestrator | + access_network = false 2026-01-01 00:02:40.493500 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:40.493504 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:40.493508 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:40.493511 | orchestrator | + name = (known after apply) 2026-01-01 00:02:40.493515 | orchestrator | + port = (known after apply) 2026-01-01 00:02:40.493519 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.493523 | orchestrator | } 2026-01-01 00:02:40.493526 | orchestrator | } 2026-01-01 00:02:40.493724 | orchestrator | 2026-01-01 00:02:40.493736 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-01 00:02:40.493741 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:40.493745 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 
00:02:40.493753 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:40.493757 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:40.493761 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:40.493765 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:40.493769 | orchestrator | + config_drive = true 2026-01-01 00:02:40.493772 | orchestrator | + created = (known after apply) 2026-01-01 00:02:40.493776 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:40.493780 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:40.493784 | orchestrator | + force_delete = false 2026-01-01 00:02:40.493787 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:40.493791 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.493795 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:40.493799 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:40.493803 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:40.493806 | orchestrator | + name = "testbed-node-1" 2026-01-01 00:02:40.493810 | orchestrator | + power_state = "active" 2026-01-01 00:02:40.493814 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.493818 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:40.493821 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:40.493825 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:40.493833 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:40.493837 | orchestrator | 2026-01-01 00:02:40.493841 | orchestrator | + block_device { 2026-01-01 00:02:40.493845 | orchestrator | + boot_index = 0 2026-01-01 00:02:40.493848 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:40.493852 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:40.493856 | orchestrator | + multiattach = false 2026-01-01 
00:02:40.493860 | orchestrator | + source_type = "volume" 2026-01-01 00:02:40.493863 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.493867 | orchestrator | } 2026-01-01 00:02:40.493871 | orchestrator | 2026-01-01 00:02:40.493875 | orchestrator | + network { 2026-01-01 00:02:40.493878 | orchestrator | + access_network = false 2026-01-01 00:02:40.493882 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:40.493886 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:40.493890 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:40.493894 | orchestrator | + name = (known after apply) 2026-01-01 00:02:40.493897 | orchestrator | + port = (known after apply) 2026-01-01 00:02:40.493901 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.493905 | orchestrator | } 2026-01-01 00:02:40.493909 | orchestrator | } 2026-01-01 00:02:40.494140 | orchestrator | 2026-01-01 00:02:40.494154 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-01 00:02:40.494158 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:40.494162 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:40.494166 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:40.494171 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:40.494174 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:40.494178 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:40.494182 | orchestrator | + config_drive = true 2026-01-01 00:02:40.494186 | orchestrator | + created = (known after apply) 2026-01-01 00:02:40.494189 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:40.494193 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:40.494197 | orchestrator | + force_delete = false 2026-01-01 00:02:40.494201 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 
00:02:40.494204 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.494208 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:40.494217 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:40.494221 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:40.494225 | orchestrator | + name = "testbed-node-2" 2026-01-01 00:02:40.494229 | orchestrator | + power_state = "active" 2026-01-01 00:02:40.494232 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.494236 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:40.494240 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:40.494244 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:40.494248 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:40.494251 | orchestrator | 2026-01-01 00:02:40.494255 | orchestrator | + block_device { 2026-01-01 00:02:40.494259 | orchestrator | + boot_index = 0 2026-01-01 00:02:40.494263 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:40.494266 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:40.494270 | orchestrator | + multiattach = false 2026-01-01 00:02:40.494274 | orchestrator | + source_type = "volume" 2026-01-01 00:02:40.494278 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.494281 | orchestrator | } 2026-01-01 00:02:40.494285 | orchestrator | 2026-01-01 00:02:40.494289 | orchestrator | + network { 2026-01-01 00:02:40.494293 | orchestrator | + access_network = false 2026-01-01 00:02:40.494296 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:40.494300 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:40.494304 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:40.494308 | orchestrator | + name = (known after apply) 2026-01-01 00:02:40.494311 | orchestrator | + port = (known after apply) 2026-01-01 00:02:40.494315 | orchestrator | + uuid 
= (known after apply) 2026-01-01 00:02:40.494319 | orchestrator | } 2026-01-01 00:02:40.494323 | orchestrator | } 2026-01-01 00:02:40.494533 | orchestrator | 2026-01-01 00:02:40.494548 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-01 00:02:40.494553 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:40.494557 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:40.494561 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:40.494564 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:40.494568 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:40.494572 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:40.494576 | orchestrator | + config_drive = true 2026-01-01 00:02:40.494580 | orchestrator | + created = (known after apply) 2026-01-01 00:02:40.494583 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:40.494587 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:40.494591 | orchestrator | + force_delete = false 2026-01-01 00:02:40.494595 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:40.494599 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.494602 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:40.494606 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:40.494610 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:40.494614 | orchestrator | + name = "testbed-node-3" 2026-01-01 00:02:40.494618 | orchestrator | + power_state = "active" 2026-01-01 00:02:40.494621 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.494625 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:40.494629 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:40.494633 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:40.494636 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:40.494640 | orchestrator | 2026-01-01 00:02:40.494644 | orchestrator | + block_device { 2026-01-01 00:02:40.494648 | orchestrator | + boot_index = 0 2026-01-01 00:02:40.494651 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:40.494655 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:40.494663 | orchestrator | + multiattach = false 2026-01-01 00:02:40.494666 | orchestrator | + source_type = "volume" 2026-01-01 00:02:40.494670 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.494674 | orchestrator | } 2026-01-01 00:02:40.494678 | orchestrator | 2026-01-01 00:02:40.494681 | orchestrator | + network { 2026-01-01 00:02:40.494685 | orchestrator | + access_network = false 2026-01-01 00:02:40.494689 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:40.494693 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:40.494696 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:40.494700 | orchestrator | + name = (known after apply) 2026-01-01 00:02:40.494704 | orchestrator | + port = (known after apply) 2026-01-01 00:02:40.494708 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.494711 | orchestrator | } 2026-01-01 00:02:40.494715 | orchestrator | } 2026-01-01 00:02:40.494894 | orchestrator | 2026-01-01 00:02:40.494906 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-01 00:02:40.494910 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:40.494914 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:40.494918 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:40.494922 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:40.494925 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:40.494929 | orchestrator | + availability_zone = "nova" 2026-01-01 
00:02:40.494933 | orchestrator | + config_drive = true 2026-01-01 00:02:40.494937 | orchestrator | + created = (known after apply) 2026-01-01 00:02:40.494941 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:40.494944 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:40.494948 | orchestrator | + force_delete = false 2026-01-01 00:02:40.494952 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:40.494956 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.494960 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:40.494963 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:40.494967 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:40.494971 | orchestrator | + name = "testbed-node-4" 2026-01-01 00:02:40.494975 | orchestrator | + power_state = "active" 2026-01-01 00:02:40.494979 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.494983 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:40.494986 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:40.494990 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:40.494994 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:40.494998 | orchestrator | 2026-01-01 00:02:40.495002 | orchestrator | + block_device { 2026-01-01 00:02:40.495006 | orchestrator | + boot_index = 0 2026-01-01 00:02:40.495009 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:40.495013 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:40.495017 | orchestrator | + multiattach = false 2026-01-01 00:02:40.495021 | orchestrator | + source_type = "volume" 2026-01-01 00:02:40.495025 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.495028 | orchestrator | } 2026-01-01 00:02:40.495032 | orchestrator | 2026-01-01 00:02:40.495036 | orchestrator | + network { 2026-01-01 00:02:40.495040 | orchestrator | + 
access_network = false 2026-01-01 00:02:40.495044 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:40.495047 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:40.495051 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:40.495055 | orchestrator | + name = (known after apply) 2026-01-01 00:02:40.495059 | orchestrator | + port = (known after apply) 2026-01-01 00:02:40.495063 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.495066 | orchestrator | } 2026-01-01 00:02:40.495070 | orchestrator | } 2026-01-01 00:02:40.495253 | orchestrator | 2026-01-01 00:02:40.495264 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-01 00:02:40.495269 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-01 00:02:40.495272 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-01 00:02:40.495276 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-01 00:02:40.495280 | orchestrator | + all_metadata = (known after apply) 2026-01-01 00:02:40.495284 | orchestrator | + all_tags = (known after apply) 2026-01-01 00:02:40.495288 | orchestrator | + availability_zone = "nova" 2026-01-01 00:02:40.495292 | orchestrator | + config_drive = true 2026-01-01 00:02:40.495295 | orchestrator | + created = (known after apply) 2026-01-01 00:02:40.495299 | orchestrator | + flavor_id = (known after apply) 2026-01-01 00:02:40.495303 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-01 00:02:40.495307 | orchestrator | + force_delete = false 2026-01-01 00:02:40.495311 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-01 00:02:40.495314 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.495318 | orchestrator | + image_id = (known after apply) 2026-01-01 00:02:40.495322 | orchestrator | + image_name = (known after apply) 2026-01-01 00:02:40.495326 | orchestrator | + key_pair = "testbed" 2026-01-01 00:02:40.495330 | orchestrator | 
+ name = "testbed-node-5" 2026-01-01 00:02:40.495333 | orchestrator | + power_state = "active" 2026-01-01 00:02:40.495337 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.495341 | orchestrator | + security_groups = (known after apply) 2026-01-01 00:02:40.495345 | orchestrator | + stop_before_destroy = false 2026-01-01 00:02:40.495348 | orchestrator | + updated = (known after apply) 2026-01-01 00:02:40.495352 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-01 00:02:40.495356 | orchestrator | 2026-01-01 00:02:40.495360 | orchestrator | + block_device { 2026-01-01 00:02:40.495363 | orchestrator | + boot_index = 0 2026-01-01 00:02:40.495367 | orchestrator | + delete_on_termination = false 2026-01-01 00:02:40.495371 | orchestrator | + destination_type = "volume" 2026-01-01 00:02:40.495375 | orchestrator | + multiattach = false 2026-01-01 00:02:40.495378 | orchestrator | + source_type = "volume" 2026-01-01 00:02:40.495382 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.495386 | orchestrator | } 2026-01-01 00:02:40.495390 | orchestrator | 2026-01-01 00:02:40.495394 | orchestrator | + network { 2026-01-01 00:02:40.495397 | orchestrator | + access_network = false 2026-01-01 00:02:40.495401 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-01 00:02:40.495405 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-01 00:02:40.495409 | orchestrator | + mac = (known after apply) 2026-01-01 00:02:40.495413 | orchestrator | + name = (known after apply) 2026-01-01 00:02:40.495416 | orchestrator | + port = (known after apply) 2026-01-01 00:02:40.495420 | orchestrator | + uuid = (known after apply) 2026-01-01 00:02:40.495438 | orchestrator | } 2026-01-01 00:02:40.495442 | orchestrator | } 2026-01-01 00:02:40.495488 | orchestrator | 2026-01-01 00:02:40.495499 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-01 00:02:40.495503 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-01-01 00:02:40.495507 | orchestrator | + fingerprint = (known after apply) 2026-01-01 00:02:40.495511 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.495515 | orchestrator | + name = "testbed" 2026-01-01 00:02:40.495519 | orchestrator | + private_key = (sensitive value) 2026-01-01 00:02:40.495522 | orchestrator | + public_key = (known after apply) 2026-01-01 00:02:40.495526 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.495530 | orchestrator | + user_id = (known after apply) 2026-01-01 00:02:40.495534 | orchestrator | } 2026-01-01 00:02:40.495575 | orchestrator | 2026-01-01 00:02:40.495585 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-01 00:02:40.495590 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-01 00:02:40.495598 | orchestrator | + device = (known after apply) 2026-01-01 00:02:40.495601 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.495605 | orchestrator | + instance_id = (known after apply) 2026-01-01 00:02:40.495609 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.495619 | orchestrator | + volume_id = (known after apply) 2026-01-01 00:02:40.495623 | orchestrator | } 2026-01-01 00:02:40.495662 | orchestrator | 2026-01-01 00:02:40.495673 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-01 00:02:40.495678 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-01 00:02:40.495682 | orchestrator | + device = (known after apply) 2026-01-01 00:02:40.495685 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.495689 | orchestrator | + instance_id = (known after apply) 2026-01-01 00:02:40.495693 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.495697 | orchestrator | + volume_id = (known after apply) 2026-01-01 
00:02:40.495701 | orchestrator | } 2026-01-01 00:02:40.495738 | orchestrator | 2026-01-01 00:02:40.495749 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-01 00:02:40.495753 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-01 00:02:40.495757 | orchestrator | + device = (known after apply) 2026-01-01 00:02:40.495761 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.495765 | orchestrator | + instance_id = (known after apply) 2026-01-01 00:02:40.495768 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.495772 | orchestrator | + volume_id = (known after apply) 2026-01-01 00:02:40.495776 | orchestrator | } 2026-01-01 00:02:40.495811 | orchestrator | 2026-01-01 00:02:40.495822 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-01-01 00:02:40.495827 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-01 00:02:40.495831 | orchestrator | + device = (known after apply) 2026-01-01 00:02:40.495834 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.495838 | orchestrator | + instance_id = (known after apply) 2026-01-01 00:02:40.495842 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.495846 | orchestrator | + volume_id = (known after apply) 2026-01-01 00:02:40.495850 | orchestrator | } 2026-01-01 00:02:40.495881 | orchestrator | 2026-01-01 00:02:40.495892 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-01-01 00:02:40.495896 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-01 00:02:40.495900 | orchestrator | + device = (known after apply) 2026-01-01 00:02:40.495903 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.495907 | orchestrator | + instance_id = (known after apply) 2026-01-01 00:02:40.495911 | 
2026-01-01 00:02:40 | orchestrator | (terraform plan output, timestamps collapsed)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-01-01 00:02:40.500522 | orchestrator | + ip_version = 4 2026-01-01 00:02:40.500526 | orchestrator | + ipv6_address_mode = (known after apply) 2026-01-01 00:02:40.500530 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-01-01 00:02:40.500534 | orchestrator | + name = "subnet-testbed-management" 2026-01-01 00:02:40.500537 | orchestrator | + network_id = (known after apply) 2026-01-01 00:02:40.500541 | orchestrator | + no_gateway = false 2026-01-01 00:02:40.500545 | orchestrator | + region = (known after apply) 2026-01-01 00:02:40.500549 | orchestrator | + service_types = (known after apply) 2026-01-01 00:02:40.500556 | orchestrator | + tenant_id = (known after apply) 2026-01-01 00:02:40.500560 | orchestrator | 2026-01-01 00:02:40.500564 | orchestrator | + allocation_pool { 2026-01-01 00:02:40.500568 | orchestrator | + end = "192.168.31.250" 2026-01-01 00:02:40.500572 | orchestrator | + start = "192.168.31.200" 2026-01-01 00:02:40.500575 | orchestrator | } 2026-01-01 00:02:40.500579 | orchestrator | } 2026-01-01 00:02:40.500611 | orchestrator | 2026-01-01 00:02:40.500622 | orchestrator | # terraform_data.image will be created 2026-01-01 00:02:40.500627 | orchestrator | + resource "terraform_data" "image" { 2026-01-01 00:02:40.500630 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.500634 | orchestrator | + input = "Ubuntu 24.04" 2026-01-01 00:02:40.500638 | orchestrator | + output = (known after apply) 2026-01-01 00:02:40.500642 | orchestrator | } 2026-01-01 00:02:40.500674 | orchestrator | 2026-01-01 00:02:40.500686 | orchestrator | # terraform_data.image_node will be created 2026-01-01 00:02:40.500690 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-01 00:02:40.500694 | orchestrator | + id = (known after apply) 2026-01-01 00:02:40.500698 | orchestrator | + input = "Ubuntu 24.04" 2026-01-01 00:02:40.500702 | orchestrator | + output = (known after apply) 2026-01-01 00:02:40.500705 | orchestrator | } 2026-01-01 
00:02:40.500721 | orchestrator | 2026-01-01 00:02:40.500726 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 2026-01-01 00:02:40.500737 | orchestrator | 2026-01-01 00:02:40.500741 | orchestrator | Changes to Outputs: 2026-01-01 00:02:40.500752 | orchestrator | + manager_address = (sensitive value) 2026-01-01 00:02:40.500756 | orchestrator | + private_key = (sensitive value) 2026-01-01 00:02:40.698671 | orchestrator | terraform_data.image_node: Creating... 2026-01-01 00:02:40.699708 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=65d49d92-5657-625c-d129-2024e4cadde6] 2026-01-01 00:02:40.699753 | orchestrator | terraform_data.image: Creating... 2026-01-01 00:02:40.699760 | orchestrator | terraform_data.image: Creation complete after 0s [id=49ddc5e0-1133-adfb-a820-aa461817d018] 2026-01-01 00:02:40.742074 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-01-01 00:02:40.744752 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-01-01 00:02:40.758102 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-01-01 00:02:40.758139 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-01-01 00:02:40.758154 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-01-01 00:02:40.758168 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-01-01 00:02:40.759727 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-01-01 00:02:40.759774 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-01-01 00:02:40.759780 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-01-01 00:02:40.761196 | orchestrator | openstack_networking_network_v2.net_management: Creating... 
2026-01-01 00:02:41.247821 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-01 00:02:41.265101 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-01-01 00:02:41.265199 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-01-01 00:02:41.268702 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-01-01 00:02:41.696690 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=3e96359a-acb4-4957-b42e-8f77999dbb36] 2026-01-01 00:02:41.698504 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-01-01 00:02:41.746721 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-01 00:02:41.752213 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-01-01 00:02:44.380100 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=24720f9e-f089-4ccc-8129-9c8809670a8e] 2026-01-01 00:02:44.396168 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=83035846-5651-49b4-8fb4-445ab40cb486] 2026-01-01 00:02:44.403972 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-01-01 00:02:44.418379 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=37c29c30-7f08-4e38-a8a3-d8f285ca48d1] 2026-01-01 00:02:44.427817 | orchestrator | local_file.id_rsa_pub: Creating... 
2026-01-01 00:02:44.428227 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=c27ef0430562967e928a3cedbdc1abda1fb0ea4f] 2026-01-01 00:02:44.435357 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=831e5d56-835d-4e89-9dc9-0085220c39c0] 2026-01-01 00:02:44.437196 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=253c1c0170eaaf80d72da6449abc39cc40a89cd5] 2026-01-01 00:02:44.438695 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-01-01 00:02:44.438915 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-01-01 00:02:44.445939 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-01-01 00:02:44.446247 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=a7505c52-a0e0-4d49-8d34-7b67910eacfb] 2026-01-01 00:02:44.446861 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=9c7219fd-4a7f-4761-a2e7-de7bb29f84f0] 2026-01-01 00:02:44.449934 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-01-01 00:02:44.450485 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-01-01 00:02:44.451371 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-01-01 00:02:44.468162 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec] 2026-01-01 00:02:44.481759 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 
2026-01-01 00:02:44.511854 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=586b5bdd-05f0-424a-894b-f7859a2e54f1] 2026-01-01 00:02:44.529383 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=b8d8b323-8d42-4427-9d99-f11bd160735d] 2026-01-01 00:02:45.158394 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=791616b9-95c6-4473-a500-83bd73db18ee] 2026-01-01 00:02:45.391410 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=b5d4d3c1-8596-4792-8ab7-90cce0a0cd9a] 2026-01-01 00:02:45.402682 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-01-01 00:02:47.880122 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=fb460ef8-794a-40a4-830a-8c8c7cea0001] 2026-01-01 00:02:47.889778 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=6e715ee9-bafc-489c-bf52-84e91a8fed44] 2026-01-01 00:02:47.912411 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91] 2026-01-01 00:02:47.925476 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=19893af0-ead3-467d-b949-06e8d6b388df] 2026-01-01 00:02:47.928604 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a] 2026-01-01 00:02:47.931713 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea] 2026-01-01 00:02:48.843565 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=1b71d5b7-8a7d-49bf-b550-1b17a164b24f] 2026-01-01 00:02:48.851516 | orchestrator | 
openstack_networking_router_interface_v2.router_interface: Creating... 2026-01-01 00:02:48.851621 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-01-01 00:02:48.851637 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-01-01 00:02:49.067197 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=959af041-34ef-4023-be20-71dfb8c17860] 2026-01-01 00:02:49.067858 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=13066154-102c-407f-b7de-835e637131da] 2026-01-01 00:02:49.083588 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-01-01 00:02:49.084016 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-01-01 00:02:49.084783 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-01-01 00:02:49.085084 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-01-01 00:02:49.085466 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-01-01 00:02:49.085584 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-01-01 00:02:49.088501 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-01-01 00:02:49.090541 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-01-01 00:02:49.098467 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 
2026-01-01 00:02:49.277882 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=d0831054-062b-4f0d-bf3e-b9540702c780] 2026-01-01 00:02:49.289100 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-01-01 00:02:49.303688 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=7e547176-1cd1-4459-ac4d-32e01216adfc] 2026-01-01 00:02:49.317782 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-01-01 00:02:49.471632 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=90694cc5-cdea-4b8b-b8d4-f8c0193f8eef] 2026-01-01 00:02:49.483963 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-01-01 00:02:49.563067 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=79ad87a7-0841-47e7-ac4b-80f62869c049] 2026-01-01 00:02:49.578213 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-01-01 00:02:49.624557 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=683b618a-e566-49c2-9bac-ef86983435ba] 2026-01-01 00:02:49.630939 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-01-01 00:02:49.728191 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=21ff0481-9825-426d-99e9-308c718a018f] 2026-01-01 00:02:49.734884 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 
2026-01-01 00:02:49.816885 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=7d381aa1-7178-41e5-b36b-cbaaa4034a40] 2026-01-01 00:02:49.828593 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-01-01 00:02:49.835360 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=b4552696-995b-4e0d-9655-7efcc471d34f] 2026-01-01 00:02:49.919900 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=92135b66-69ae-4c62-bde3-9606ae0df9ae] 2026-01-01 00:02:50.046159 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=742bb070-bee8-4b57-ba3d-9f6cae3370a9] 2026-01-01 00:02:50.350367 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=1e5844ab-2096-4950-a3ed-61b0042a41e3] 2026-01-01 00:02:50.624615 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=5a60e7ee-ac97-46b0-ab1c-d3f9ae52c40f] 2026-01-01 00:02:50.825496 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=e465e469-574b-4def-9ffa-282683733d9b] 2026-01-01 00:02:50.940791 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=be1d19ae-4289-4349-9aa2-1cb962c126d7] 2026-01-01 00:02:50.947378 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=3cac5394-62bd-46f6-a690-6443c228dc88] 2026-01-01 00:02:51.270506 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=20c673b6-53e2-4d65-854e-d4d906a5d9db] 2026-01-01 00:02:51.298358 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 
2026-01-01 00:02:51.299889 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-01-01 00:02:51.310291 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-01-01 00:02:51.310616 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-01-01 00:02:51.310724 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-01-01 00:02:51.324854 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=efb1cb86-402f-4bd4-bbd8-b87e43c1dd42] 2026-01-01 00:02:51.326101 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-01-01 00:02:51.327441 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-01-01 00:02:54.490795 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=14d2be8b-cd61-4c2f-b88f-25c30ce5f03c] 2026-01-01 00:02:54.499671 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-01-01 00:02:54.505791 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-01-01 00:02:54.509131 | orchestrator | local_file.inventory: Creating... 2026-01-01 00:02:54.510827 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=03199b2ff5833aecd4fe8b0e9d27ef0f4e9dde2c] 2026-01-01 00:02:54.517887 | orchestrator | local_file.inventory: Creation complete after 0s [id=1fad75cb9f36c0d9afcadaab597c55c5c20f9532] 2026-01-01 00:02:56.619809 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 3s [id=14d2be8b-cd61-4c2f-b88f-25c30ce5f03c] 2026-01-01 00:03:01.304996 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-01-01 00:03:01.311177 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[10s elapsed] 2026-01-01 00:03:01.311249 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-01-01 00:03:01.311261 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-01-01 00:03:01.327306 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-01-01 00:03:01.328463 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-01-01 00:03:11.312675 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-01-01 00:03:11.312813 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-01-01 00:03:11.312828 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-01-01 00:03:11.312840 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-01-01 00:03:11.327948 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-01-01 00:03:11.329059 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-01-01 00:03:21.320809 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-01-01 00:03:21.320922 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-01-01 00:03:21.320938 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-01-01 00:03:21.320945 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-01-01 00:03:21.329112 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-01-01 00:03:21.329193 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2026-01-01 00:03:21.946920 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=043ae641-2da9-4653-89c9-a8308f7409f5] 2026-01-01 00:03:31.326265 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-01-01 00:03:31.326369 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-01-01 00:03:31.326393 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-01-01 00:03:31.326405 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-01-01 00:03:31.329469 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-01-01 00:03:32.291618 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=f0d40301-28bf-4ae2-8d36-fe84ea889e3d] 2026-01-01 00:03:32.856203 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 42s [id=daf49175-b96d-4ce8-b1c2-3967d74b8846] 2026-01-01 00:03:32.887935 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 42s [id=6c0e7109-3a30-4e63-bdb1-1b4f6789b4bf] 2026-01-01 00:03:33.015014 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 42s [id=f4936add-5a9d-453a-a48a-6bb19ddfaf06] 2026-01-01 00:03:33.285226 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 42s [id=3409e738-6aac-455e-b23a-e970da3fdd28] 2026-01-01 00:03:33.309359 | orchestrator | null_resource.node_semaphore: Creating... 2026-01-01 00:03:33.319242 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4623116198958109266] 2026-01-01 00:03:33.325044 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 
2026-01-01 00:03:33.330909 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-01-01 00:03:33.341075 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-01-01 00:03:33.343336 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-01-01 00:03:33.359071 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-01-01 00:03:33.373322 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-01-01 00:03:33.373941 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-01-01 00:03:33.375785 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-01-01 00:03:33.387418 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-01-01 00:03:33.394228 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 
2026-01-01 00:03:36.815872 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=6c0e7109-3a30-4e63-bdb1-1b4f6789b4bf/24720f9e-f089-4ccc-8129-9c8809670a8e] 2026-01-01 00:03:36.844959 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=3409e738-6aac-455e-b23a-e970da3fdd28/a7505c52-a0e0-4d49-8d34-7b67910eacfb] 2026-01-01 00:03:36.853951 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=f4936add-5a9d-453a-a48a-6bb19ddfaf06/37c29c30-7f08-4e38-a8a3-d8f285ca48d1] 2026-01-01 00:03:36.881076 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=6c0e7109-3a30-4e63-bdb1-1b4f6789b4bf/586b5bdd-05f0-424a-894b-f7859a2e54f1] 2026-01-01 00:03:36.906983 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=3409e738-6aac-455e-b23a-e970da3fdd28/831e5d56-835d-4e89-9dc9-0085220c39c0] 2026-01-01 00:03:36.913031 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=f4936add-5a9d-453a-a48a-6bb19ddfaf06/83035846-5651-49b4-8fb4-445ab40cb486] 2026-01-01 00:03:42.982222 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=6c0e7109-3a30-4e63-bdb1-1b4f6789b4bf/9c7219fd-4a7f-4761-a2e7-de7bb29f84f0] 2026-01-01 00:03:42.997568 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=3409e738-6aac-455e-b23a-e970da3fdd28/b8d8b323-8d42-4427-9d99-f11bd160735d] 2026-01-01 00:03:43.005821 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=f4936add-5a9d-453a-a48a-6bb19ddfaf06/144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec] 2026-01-01 00:03:43.394847 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-01-01 00:03:53.404521 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-01-01 00:03:53.807897 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=767cc5e3-b030-4d7b-85dc-208160b58356] 2026-01-01 00:03:53.835716 | orchestrator | 2026-01-01 00:03:53.835854 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-01-01 00:03:53.835878 | orchestrator | 2026-01-01 00:03:53.835896 | orchestrator | Outputs: 2026-01-01 00:03:53.835915 | orchestrator | 2026-01-01 00:03:53.835933 | orchestrator | manager_address = 2026-01-01 00:03:53.835950 | orchestrator | private_key = 2026-01-01 00:03:54.044125 | orchestrator | ok: Runtime: 0:01:18.572312 2026-01-01 00:03:54.073290 | 2026-01-01 00:03:54.073434 | TASK [Create infrastructure (stable)] 2026-01-01 00:03:54.614032 | orchestrator | skipping: Conditional result was False 2026-01-01 00:03:54.632555 | 2026-01-01 00:03:54.632735 | TASK [Fetch manager address] 2026-01-01 00:03:55.147789 | orchestrator | ok 2026-01-01 00:03:55.156725 | 2026-01-01 00:03:55.156854 | TASK [Set manager_host address] 2026-01-01 00:03:55.267580 | orchestrator | ok 2026-01-01 00:03:55.277609 | 2026-01-01 00:03:55.277745 | LOOP [Update ansible collections] 2026-01-01 00:03:57.058194 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-01 00:03:57.058502 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-01 00:03:57.058578 | orchestrator | Starting galaxy collection install process 2026-01-01 00:03:57.058616 | orchestrator | Process install dependency map 2026-01-01 00:03:57.058646 | orchestrator | Starting collection install process 2026-01-01 00:03:57.058672 | orchestrator | Installing 'osism.commons:999.0.0' to 
'/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2026-01-01 00:03:57.058703 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2026-01-01 00:03:57.058746 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-01-01 00:03:57.058828 | orchestrator | ok: Item: commons Runtime: 0:00:01.438140 2026-01-01 00:03:58.669887 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-01 00:03:58.670082 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-01 00:03:58.670189 | orchestrator | Starting galaxy collection install process 2026-01-01 00:03:58.670234 | orchestrator | Process install dependency map 2026-01-01 00:03:58.670278 | orchestrator | Starting collection install process 2026-01-01 00:03:58.670318 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2026-01-01 00:03:58.670360 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2026-01-01 00:03:58.670392 | orchestrator | osism.services:999.0.0 was installed successfully 2026-01-01 00:03:58.670444 | orchestrator | ok: Item: services Runtime: 0:00:01.342452 2026-01-01 00:03:58.692646 | 2026-01-01 00:03:58.692854 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-01 00:04:09.301257 | orchestrator | ok 2026-01-01 00:04:09.311964 | 2026-01-01 00:04:09.312093 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-01 00:05:09.359065 | orchestrator | ok 2026-01-01 00:05:09.371529 | 2026-01-01 00:05:09.371709 | TASK [Fetch manager ssh hostkey] 2026-01-01 00:05:10.947904 | orchestrator | Output suppressed because no_log was given 2026-01-01 00:05:10.966338 | 2026-01-01 
00:05:10.966652 | TASK [Get ssh keypair from terraform environment] 2026-01-01 00:05:11.517672 | orchestrator | ok: Runtime: 0:00:00.010946 2026-01-01 00:05:11.534069 | 2026-01-01 00:05:11.534315 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-01 00:05:11.584686 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-01-01 00:05:11.595424 | 2026-01-01 00:05:11.595573 | TASK [Run manager part 0] 2026-01-01 00:05:12.568805 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-01 00:05:12.630954 | orchestrator | 2026-01-01 00:05:12.631008 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-01-01 00:05:12.631015 | orchestrator | 2026-01-01 00:05:12.631028 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-01-01 00:05:14.517012 | orchestrator | ok: [testbed-manager] 2026-01-01 00:05:14.517221 | orchestrator | 2026-01-01 00:05:14.517248 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-01 00:05:14.517259 | orchestrator | 2026-01-01 00:05:14.517269 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:05:16.608951 | orchestrator | ok: [testbed-manager] 2026-01-01 00:05:16.609029 | orchestrator | 2026-01-01 00:05:16.609037 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-01 00:05:17.316141 | orchestrator | ok: [testbed-manager] 2026-01-01 00:05:17.316206 | orchestrator | 2026-01-01 00:05:17.316220 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-01 00:05:17.373233 | orchestrator | skipping: [testbed-manager] 2026-01-01 
00:05:17.373288 | orchestrator | 2026-01-01 00:05:17.373300 | orchestrator | TASK [Update package cache] **************************************************** 2026-01-01 00:05:17.399396 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:17.399468 | orchestrator | 2026-01-01 00:05:17.399476 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-01 00:05:17.434052 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:17.434102 | orchestrator | 2026-01-01 00:05:17.434109 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-01 00:05:17.470620 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:17.470714 | orchestrator | 2026-01-01 00:05:17.470730 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-01 00:05:17.506958 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:17.507005 | orchestrator | 2026-01-01 00:05:17.507012 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-01-01 00:05:17.542888 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:17.542946 | orchestrator | 2026-01-01 00:05:17.542956 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-01-01 00:05:17.576768 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:05:17.576823 | orchestrator | 2026-01-01 00:05:17.576832 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-01-01 00:05:18.412947 | orchestrator | changed: [testbed-manager] 2026-01-01 00:05:18.413000 | orchestrator | 2026-01-01 00:05:18.413006 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-01-01 00:08:04.487483 | orchestrator | changed: [testbed-manager] 2026-01-01 00:08:04.487629 | orchestrator | 2026-01-01 00:08:04.487652 | 
orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-01 00:09:26.540994 | orchestrator | changed: [testbed-manager] 2026-01-01 00:09:26.541066 | orchestrator | 2026-01-01 00:09:26.541076 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-01 00:09:52.738486 | orchestrator | changed: [testbed-manager] 2026-01-01 00:09:52.738542 | orchestrator | 2026-01-01 00:09:52.738553 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-01 00:10:02.503527 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:02.503574 | orchestrator | 2026-01-01 00:10:02.503582 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-01 00:10:02.543472 | orchestrator | ok: [testbed-manager] 2026-01-01 00:10:02.543513 | orchestrator | 2026-01-01 00:10:02.543521 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-01 00:10:03.413754 | orchestrator | ok: [testbed-manager] 2026-01-01 00:10:03.413795 | orchestrator | 2026-01-01 00:10:03.413804 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-01 00:10:04.188467 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:04.188551 | orchestrator | 2026-01-01 00:10:04.188566 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-01 00:10:11.314672 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:11.314741 | orchestrator | 2026-01-01 00:10:11.314771 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-01 00:10:17.614132 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:17.614210 | orchestrator | 2026-01-01 00:10:17.614226 | orchestrator | TASK [Install requests >= 2.32.2] 
********************************************** 2026-01-01 00:10:20.374391 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:20.374440 | orchestrator | 2026-01-01 00:10:20.374448 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-01 00:10:22.245941 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:22.245989 | orchestrator | 2026-01-01 00:10:22.246000 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-01 00:10:23.467943 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-01 00:10:23.468042 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-01 00:10:23.468058 | orchestrator | 2026-01-01 00:10:23.468070 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-01 00:10:23.514872 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-01 00:10:23.514924 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-01 00:10:23.514930 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-01 00:10:23.514935 | orchestrator | deprecation_warnings=False in ansible.cfg. 
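The "Create venv directory" / "Install ... in venv" tasks above bootstrap a dedicated virtual environment on the manager and pip-install tooling into it. A minimal offline sketch of that bootstrap using the stdlib `venv` module (the target path and package list mirror the log but are illustrative; the pip step is shown as a comment to keep the sketch network-free):

```python
import os
import tempfile
import venv

# Packages the tasks install into the venv (illustrative subset).
PACKAGES = ["netaddr", "ansible-core", "requests>=2.32.2", "docker>=7.1.0"]

def bootstrap_venv(path):
    """Create a virtual environment and return its interpreter path.
    A real bootstrap would use with_pip=True and then run
    `<venv>/bin/python -m pip install <packages>` for each entry."""
    venv.create(path, with_pip=False)  # with_pip=True in a real bootstrap
    python = os.path.join(path, "bin", "python")
    # subprocess.run([python, "-m", "pip", "install", *PACKAGES], check=True)
    return python

target = os.path.join(tempfile.mkdtemp(), "venv")
interpreter = bootstrap_venv(target)
print(interpreter)
```

Keeping Ansible and its Python dependencies in `/opt/venv` rather than in system site-packages is what lets the later "Recursively change ownership of /opt/venv" task hand the whole toolchain to the operator user in one step.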
2026-01-01 00:10:30.215233 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-01 00:10:30.215274 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-01 00:10:30.215280 | orchestrator | 2026-01-01 00:10:30.215285 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-01 00:10:30.820141 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:30.820186 | orchestrator | 2026-01-01 00:10:30.820194 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-01 00:10:52.042171 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-01 00:10:52.042227 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-01 00:10:52.042235 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-01 00:10:52.042240 | orchestrator | 2026-01-01 00:10:52.042245 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-01 00:10:54.482642 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-01 00:10:54.482724 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-01 00:10:54.482741 | orchestrator | 2026-01-01 00:10:54.482755 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-01 00:10:54.482769 | orchestrator | 2026-01-01 00:10:54.482782 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:10:55.921815 | orchestrator | ok: [testbed-manager] 2026-01-01 00:10:55.921867 | orchestrator | 2026-01-01 00:10:55.921880 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-01 00:10:55.966066 | orchestrator | ok: [testbed-manager] 2026-01-01 00:10:55.966132 | 
orchestrator | 2026-01-01 00:10:55.966147 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-01 00:10:56.050832 | orchestrator | ok: [testbed-manager] 2026-01-01 00:10:56.050900 | orchestrator | 2026-01-01 00:10:56.050916 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-01 00:10:56.874111 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:56.874154 | orchestrator | 2026-01-01 00:10:56.874162 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-01 00:10:57.637877 | orchestrator | changed: [testbed-manager] 2026-01-01 00:10:57.637937 | orchestrator | 2026-01-01 00:10:57.637950 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-01 00:10:59.040517 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-01 00:10:59.040615 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-01 00:10:59.040632 | orchestrator | 2026-01-01 00:10:59.040660 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-01 00:11:00.490000 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:00.490176 | orchestrator | 2026-01-01 00:11:00.490195 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-01 00:11:02.344637 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-01 00:11:02.344735 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-01 00:11:02.344752 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-01 00:11:02.344764 | orchestrator | 2026-01-01 00:11:02.344778 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-01 00:11:02.402547 | orchestrator | skipping: 
[testbed-manager] 2026-01-01 00:11:02.402647 | orchestrator | 2026-01-01 00:11:02.402666 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-01 00:11:02.468388 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:11:02.468485 | orchestrator | 2026-01-01 00:11:02.468505 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-01 00:11:03.081472 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:03.081512 | orchestrator | 2026-01-01 00:11:03.081520 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-01 00:11:03.154548 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:11:03.154600 | orchestrator | 2026-01-01 00:11:03.154613 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-01 00:11:04.044197 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-01 00:11:04.044239 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:04.044247 | orchestrator | 2026-01-01 00:11:04.044253 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-01 00:11:04.079880 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:11:04.079925 | orchestrator | 2026-01-01 00:11:04.079934 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-01 00:11:04.110254 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:11:04.110340 | orchestrator | 2026-01-01 00:11:04.110351 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-01 00:11:04.150788 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:11:04.150862 | orchestrator | 2026-01-01 00:11:04.150879 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-01 00:11:04.227220 | 
orchestrator | skipping: [testbed-manager] 2026-01-01 00:11:04.227297 | orchestrator | 2026-01-01 00:11:04.227309 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-01 00:11:04.953295 | orchestrator | ok: [testbed-manager] 2026-01-01 00:11:04.953949 | orchestrator | 2026-01-01 00:11:04.953973 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-01 00:11:04.953986 | orchestrator | 2026-01-01 00:11:04.953999 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:11:06.388721 | orchestrator | ok: [testbed-manager] 2026-01-01 00:11:06.388810 | orchestrator | 2026-01-01 00:11:06.388827 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-01 00:11:07.375403 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:07.375493 | orchestrator | 2026-01-01 00:11:07.375509 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:11:07.375523 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-01 00:11:07.375535 | orchestrator | 2026-01-01 00:11:07.851386 | orchestrator | ok: Runtime: 0:05:55.545331 2026-01-01 00:11:07.871783 | 2026-01-01 00:11:07.871970 | TASK [Point out that the log in on the manager is now possible] 2026-01-01 00:11:07.924950 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-01 00:11:07.935861 | 2026-01-01 00:11:07.935999 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-01 00:11:07.987678 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
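The PLAY RECAP line above (`ok=33 changed=23 unreachable=0 failed=0 ...`) is the per-host summary a CI job typically inspects to judge a run. A small Python sketch of parsing one such line into counters (the parser is an illustration written for this log format, not part of Ansible or the testbed tooling):

```python
import re

# Recap line as emitted in the log above.
RECAP = ("testbed-manager : ok=33 changed=23 unreachable=0 "
         "failed=0 skipped=14 rescued=0 ignored=0")

def parse_recap(line):
    """Split one PLAY RECAP line into (host, {counter: value})."""
    host, _, rest = line.partition(":")
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, counts = parse_recap(RECAP)
print(host, counts)
# A run is healthy when nothing failed and every host was reachable.
assert counts["failed"] == 0 and counts["unreachable"] == 0
```

The high `skipped` count is expected here: the part-0 playbook carries branches for RedHat, Debian, and older Ubuntu releases, and only the Ubuntu 24.04 path runs.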
2026-01-01 00:11:07.996735 | 2026-01-01 00:11:07.996864 | TASK [Run manager part 1 + 2] 2026-01-01 00:11:10.362764 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-01 00:11:10.421306 | orchestrator | 2026-01-01 00:11:10.421379 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-01 00:11:10.421391 | orchestrator | 2026-01-01 00:11:10.421410 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:11:13.075506 | orchestrator | ok: [testbed-manager] 2026-01-01 00:11:13.075566 | orchestrator | 2026-01-01 00:11:13.075591 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-01 00:11:13.111467 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:11:13.111515 | orchestrator | 2026-01-01 00:11:13.111524 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-01 00:11:13.149990 | orchestrator | ok: [testbed-manager] 2026-01-01 00:11:13.150076 | orchestrator | 2026-01-01 00:11:13.150088 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-01 00:11:13.184980 | orchestrator | ok: [testbed-manager] 2026-01-01 00:11:13.185034 | orchestrator | 2026-01-01 00:11:13.185041 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-01 00:11:13.261967 | orchestrator | ok: [testbed-manager] 2026-01-01 00:11:13.262061 | orchestrator | 2026-01-01 00:11:13.262073 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-01 00:11:13.315243 | orchestrator | ok: [testbed-manager] 2026-01-01 00:11:13.315316 | orchestrator | 2026-01-01 00:11:13.315325 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-01 00:11:13.366942 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-01 00:11:13.366992 | orchestrator | 2026-01-01 00:11:13.366998 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-01 00:11:14.072904 | orchestrator | ok: [testbed-manager] 2026-01-01 00:11:14.072960 | orchestrator | 2026-01-01 00:11:14.072969 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-01 00:11:14.118681 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:11:14.118736 | orchestrator | 2026-01-01 00:11:14.118741 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-01 00:11:15.525199 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:15.525260 | orchestrator | 2026-01-01 00:11:15.525324 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-01 00:11:16.123982 | orchestrator | ok: [testbed-manager] 2026-01-01 00:11:16.124037 | orchestrator | 2026-01-01 00:11:16.124043 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-01 00:11:17.288539 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:17.288600 | orchestrator | 2026-01-01 00:11:17.288610 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-01 00:11:32.691955 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:32.692056 | orchestrator | 2026-01-01 00:11:32.692073 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-01 00:11:33.393349 | orchestrator | ok: [testbed-manager] 2026-01-01 00:11:33.393443 | orchestrator | 2026-01-01 00:11:33.393461 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-01 00:11:33.443924 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:11:33.443981 | orchestrator | 2026-01-01 00:11:33.443989 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-01 00:11:34.413013 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:34.413103 | orchestrator | 2026-01-01 00:11:34.413130 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-01 00:11:35.422288 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:35.422336 | orchestrator | 2026-01-01 00:11:35.422344 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-01 00:11:35.992456 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:35.992504 | orchestrator | 2026-01-01 00:11:35.992513 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-01 00:11:36.032114 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-01 00:11:36.032249 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-01 00:11:36.032378 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-01 00:11:36.032394 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-01 00:11:38.821042 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:38.821143 | orchestrator | 2026-01-01 00:11:38.821160 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-01 00:11:48.044799 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-01 00:11:48.044950 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-01 00:11:48.044965 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-01 00:11:48.044974 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-01 00:11:48.044987 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-01 00:11:48.044995 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-01 00:11:48.045003 | orchestrator | 2026-01-01 00:11:48.045011 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-01 00:11:49.144876 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:49.144919 | orchestrator | 2026-01-01 00:11:49.144927 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-01 00:11:49.191082 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:11:49.191123 | orchestrator | 2026-01-01 00:11:49.191130 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-01 00:11:52.455995 | orchestrator | changed: [testbed-manager] 2026-01-01 00:11:52.456040 | orchestrator | 2026-01-01 00:11:52.456048 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-01 00:11:52.493769 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:11:52.493807 | orchestrator | 2026-01-01 00:11:52.493814 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-01 00:13:36.500509 | orchestrator | changed: [testbed-manager] 2026-01-01 
00:13:36.500564 | orchestrator | 2026-01-01 00:13:36.500575 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-01 00:13:37.715927 | orchestrator | ok: [testbed-manager] 2026-01-01 00:13:37.715971 | orchestrator | 2026-01-01 00:13:37.715979 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:13:37.715986 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-01 00:13:37.715992 | orchestrator | 2026-01-01 00:13:38.126305 | orchestrator | ok: Runtime: 0:02:29.516893 2026-01-01 00:13:38.144896 | 2026-01-01 00:13:38.145077 | TASK [Reboot manager] 2026-01-01 00:13:39.687710 | orchestrator | ok: Runtime: 0:00:00.979564 2026-01-01 00:13:39.705171 | 2026-01-01 00:13:39.705341 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-01 00:13:57.527390 | orchestrator | ok 2026-01-01 00:13:57.540333 | 2026-01-01 00:13:57.540477 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-01 00:14:57.597100 | orchestrator | ok 2026-01-01 00:14:57.607343 | 2026-01-01 00:14:57.607490 | TASK [Deploy manager + bootstrap nodes] 2026-01-01 00:15:00.362258 | orchestrator | 2026-01-01 00:15:00.362484 | orchestrator | # DEPLOY MANAGER 2026-01-01 00:15:00.362508 | orchestrator | 2026-01-01 00:15:00.362522 | orchestrator | + set -e 2026-01-01 00:15:00.362535 | orchestrator | + echo 2026-01-01 00:15:00.362550 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-01 00:15:00.362568 | orchestrator | + echo 2026-01-01 00:15:00.362622 | orchestrator | + cat /opt/manager-vars.sh 2026-01-01 00:15:00.365935 | orchestrator | export NUMBER_OF_NODES=6 2026-01-01 00:15:00.365994 | orchestrator | 2026-01-01 00:15:00.366008 | orchestrator | export CEPH_VERSION=reef 2026-01-01 00:15:00.366070 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-01 00:15:00.366106 | orchestrator 
| export MANAGER_VERSION=latest 2026-01-01 00:15:00.366133 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-01 00:15:00.366144 | orchestrator | 2026-01-01 00:15:00.366163 | orchestrator | export ARA=false 2026-01-01 00:15:00.366176 | orchestrator | export DEPLOY_MODE=manager 2026-01-01 00:15:00.366194 | orchestrator | export TEMPEST=true 2026-01-01 00:15:00.366206 | orchestrator | export IS_ZUUL=true 2026-01-01 00:15:00.366217 | orchestrator | 2026-01-01 00:15:00.366235 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.183 2026-01-01 00:15:00.366247 | orchestrator | export EXTERNAL_API=false 2026-01-01 00:15:00.366259 | orchestrator | 2026-01-01 00:15:00.366269 | orchestrator | export IMAGE_USER=ubuntu 2026-01-01 00:15:00.366285 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-01 00:15:00.366296 | orchestrator | 2026-01-01 00:15:00.366307 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-01 00:15:00.366329 | orchestrator | 2026-01-01 00:15:00.366341 | orchestrator | + echo 2026-01-01 00:15:00.366354 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-01 00:15:00.367235 | orchestrator | ++ export INTERACTIVE=false 2026-01-01 00:15:00.367270 | orchestrator | ++ INTERACTIVE=false 2026-01-01 00:15:00.367285 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-01 00:15:00.367298 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-01 00:15:00.367407 | orchestrator | + source /opt/manager-vars.sh 2026-01-01 00:15:00.367422 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-01 00:15:00.367434 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-01 00:15:00.367449 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-01 00:15:00.367460 | orchestrator | ++ CEPH_VERSION=reef 2026-01-01 00:15:00.367472 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-01 00:15:00.367483 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-01 00:15:00.367498 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-01 00:15:00.367509 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-01-01 00:15:00.367520 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-01 00:15:00.367544 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-01 00:15:00.367556 | orchestrator | ++ export ARA=false 2026-01-01 00:15:00.367570 | orchestrator | ++ ARA=false 2026-01-01 00:15:00.367582 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-01 00:15:00.367593 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-01 00:15:00.367604 | orchestrator | ++ export TEMPEST=true 2026-01-01 00:15:00.367615 | orchestrator | ++ TEMPEST=true 2026-01-01 00:15:00.367626 | orchestrator | ++ export IS_ZUUL=true 2026-01-01 00:15:00.367636 | orchestrator | ++ IS_ZUUL=true 2026-01-01 00:15:00.367648 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.183 2026-01-01 00:15:00.367659 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.183 2026-01-01 00:15:00.367674 | orchestrator | ++ export EXTERNAL_API=false 2026-01-01 00:15:00.367685 | orchestrator | ++ EXTERNAL_API=false 2026-01-01 00:15:00.367696 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-01 00:15:00.367707 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-01 00:15:00.367718 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-01 00:15:00.367729 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-01 00:15:00.367743 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-01 00:15:00.367755 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-01 00:15:00.367813 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-01 00:15:00.433733 | orchestrator | + docker version 2026-01-01 00:15:00.727895 | orchestrator | Client: Docker Engine - Community 2026-01-01 00:15:00.728028 | orchestrator | Version: 27.5.1 2026-01-01 00:15:00.728043 | orchestrator | API version: 1.47 2026-01-01 00:15:00.728056 | orchestrator | Go version: go1.22.11 2026-01-01 00:15:00.728066 | orchestrator | Git commit: 9f9e405 2026-01-01 00:15:00.728096 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-01 00:15:00.728109 | orchestrator | OS/Arch: linux/amd64 2026-01-01 00:15:00.728119 | orchestrator | Context: default 2026-01-01 00:15:00.728129 | orchestrator | 2026-01-01 00:15:00.728140 | orchestrator | Server: Docker Engine - Community 2026-01-01 00:15:00.728150 | orchestrator | Engine: 2026-01-01 00:15:00.728160 | orchestrator | Version: 27.5.1 2026-01-01 00:15:00.728171 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-01 00:15:00.728229 | orchestrator | Go version: go1.22.11 2026-01-01 00:15:00.728240 | orchestrator | Git commit: 4c9b3b0 2026-01-01 00:15:00.728250 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-01 00:15:00.728260 | orchestrator | OS/Arch: linux/amd64 2026-01-01 00:15:00.728270 | orchestrator | Experimental: false 2026-01-01 00:15:00.728280 | orchestrator | containerd: 2026-01-01 00:15:00.728289 | orchestrator | Version: v2.2.1 2026-01-01 00:15:00.728300 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-01 00:15:00.728311 | orchestrator | runc: 2026-01-01 00:15:00.728321 | orchestrator | Version: 1.3.4 2026-01-01 00:15:00.728331 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-01 00:15:00.728341 | orchestrator | docker-init: 2026-01-01 00:15:00.728351 | orchestrator | Version: 0.19.0 2026-01-01 00:15:00.728362 | orchestrator | GitCommit: de40ad0 2026-01-01 00:15:00.732123 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-01 00:15:00.739581 | orchestrator | + set -e 2026-01-01 00:15:00.739664 | orchestrator | + source /opt/manager-vars.sh 2026-01-01 00:15:00.739674 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-01 00:15:00.739685 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-01 00:15:00.739693 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-01 00:15:00.739701 | orchestrator | ++ CEPH_VERSION=reef 2026-01-01 00:15:00.739709 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-01 
00:15:00.739718 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-01 00:15:00.739726 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-01 00:15:00.739734 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-01 00:15:00.739742 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-01 00:15:00.739750 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-01 00:15:00.739758 | orchestrator | ++ export ARA=false 2026-01-01 00:15:00.739766 | orchestrator | ++ ARA=false 2026-01-01 00:15:00.739774 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-01 00:15:00.739783 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-01 00:15:00.739801 | orchestrator | ++ export TEMPEST=true 2026-01-01 00:15:00.739809 | orchestrator | ++ TEMPEST=true 2026-01-01 00:15:00.739817 | orchestrator | ++ export IS_ZUUL=true 2026-01-01 00:15:00.739825 | orchestrator | ++ IS_ZUUL=true 2026-01-01 00:15:00.739833 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.183 2026-01-01 00:15:00.739841 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.183 2026-01-01 00:15:00.739850 | orchestrator | ++ export EXTERNAL_API=false 2026-01-01 00:15:00.739857 | orchestrator | ++ EXTERNAL_API=false 2026-01-01 00:15:00.739865 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-01 00:15:00.739873 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-01 00:15:00.739881 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-01 00:15:00.739889 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-01 00:15:00.739898 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-01 00:15:00.739906 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-01 00:15:00.739914 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-01 00:15:00.739922 | orchestrator | ++ export INTERACTIVE=false 2026-01-01 00:15:00.739930 | orchestrator | ++ INTERACTIVE=false 2026-01-01 00:15:00.739938 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-01 00:15:00.739950 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-01-01 00:15:00.739961 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-01 00:15:00.739970 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-01 00:15:00.739978 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-01 00:15:00.746072 | orchestrator | + set -e 2026-01-01 00:15:00.746135 | orchestrator | + VERSION=reef 2026-01-01 00:15:00.747055 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-01 00:15:00.751721 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-01 00:15:00.751748 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-01 00:15:00.757418 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-01-01 00:15:00.764938 | orchestrator | + set -e 2026-01-01 00:15:00.764959 | orchestrator | + VERSION=2024.2 2026-01-01 00:15:00.765827 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-01 00:15:00.769810 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-01 00:15:00.769850 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-01-01 00:15:00.775557 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-01 00:15:00.776346 | orchestrator | ++ semver latest 7.0.0 2026-01-01 00:15:00.846699 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-01 00:15:00.846806 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-01 00:15:00.846824 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-01 00:15:00.847735 | orchestrator | ++ semver latest 10.0.0-0 2026-01-01 00:15:00.907393 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-01 00:15:00.907785 | orchestrator | ++ semver 2024.2 2025.1 2026-01-01 00:15:00.970760 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-01 00:15:00.970848 | orchestrator | + 
/opt/configuration/scripts/enable-resource-nodes.sh 2026-01-01 00:15:01.073509 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-01 00:15:01.074459 | orchestrator | + source /opt/venv/bin/activate 2026-01-01 00:15:01.076118 | orchestrator | ++ deactivate nondestructive 2026-01-01 00:15:01.076186 | orchestrator | ++ '[' -n '' ']' 2026-01-01 00:15:01.076201 | orchestrator | ++ '[' -n '' ']' 2026-01-01 00:15:01.076212 | orchestrator | ++ hash -r 2026-01-01 00:15:01.076224 | orchestrator | ++ '[' -n '' ']' 2026-01-01 00:15:01.076235 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-01 00:15:01.076246 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-01 00:15:01.076260 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-01-01 00:15:01.076272 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-01 00:15:01.076283 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-01 00:15:01.076294 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-01 00:15:01.076315 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-01 00:15:01.076327 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-01 00:15:01.076339 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-01 00:15:01.076351 | orchestrator | ++ export PATH 2026-01-01 00:15:01.076362 | orchestrator | ++ '[' -n '' ']' 2026-01-01 00:15:01.076373 | orchestrator | ++ '[' -z '' ']' 2026-01-01 00:15:01.076384 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-01 00:15:01.076395 | orchestrator | ++ PS1='(venv) ' 2026-01-01 00:15:01.076405 | orchestrator | ++ export PS1 2026-01-01 00:15:01.076416 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-01 00:15:01.076428 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-01 00:15:01.076439 | orchestrator | ++ hash -r 2026-01-01 00:15:01.076673 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-01 00:15:02.552268 | orchestrator | 2026-01-01 00:15:02.552398 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-01 00:15:02.552416 | orchestrator | 2026-01-01 00:15:02.552428 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-01 00:15:03.159717 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:03.159818 | orchestrator | 2026-01-01 00:15:03.159834 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-01-01 00:15:04.190426 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:04.190558 | orchestrator | 2026-01-01 00:15:04.190577 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-01 00:15:04.190593 | orchestrator | 2026-01-01 00:15:04.190604 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:15:06.760101 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:06.760197 | orchestrator | 2026-01-01 00:15:06.760213 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-01 00:15:06.828953 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:06.829008 | orchestrator | 2026-01-01 00:15:06.829016 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-01 00:15:07.329287 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:07.329382 | orchestrator | 2026-01-01 00:15:07.329410 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-01-01 00:15:07.376821 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:07.376894 | orchestrator | 2026-01-01 00:15:07.376908 | orchestrator | TASK [Install HWE 
kernel package on Ubuntu] ************************************ 2026-01-01 00:15:07.753814 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:07.753928 | orchestrator | 2026-01-01 00:15:07.753955 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-01 00:15:07.811610 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:07.811703 | orchestrator | 2026-01-01 00:15:07.811719 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-01 00:15:08.168298 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:08.168396 | orchestrator | 2026-01-01 00:15:08.168411 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-01-01 00:15:08.299929 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:08.300853 | orchestrator | 2026-01-01 00:15:08.300910 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-01 00:15:08.300932 | orchestrator | 2026-01-01 00:15:08.300950 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:15:10.106587 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:10.106857 | orchestrator | 2026-01-01 00:15:10.106875 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-01 00:15:10.216296 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-01 00:15:10.216413 | orchestrator | 2026-01-01 00:15:10.216431 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-01 00:15:10.290967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-01 00:15:10.291041 | orchestrator | 2026-01-01 00:15:10.291052 | orchestrator | TASK [osism.services.traefik : Create required 
directories] ******************** 2026-01-01 00:15:11.424099 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-01 00:15:11.424187 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-01 00:15:11.424202 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-01 00:15:11.424214 | orchestrator | 2026-01-01 00:15:11.424226 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-01 00:15:13.323993 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-01 00:15:13.324103 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-01-01 00:15:13.324119 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-01 00:15:13.324134 | orchestrator | 2026-01-01 00:15:13.324147 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-01 00:15:13.987527 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-01 00:15:13.987591 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:13.987597 | orchestrator | 2026-01-01 00:15:13.987602 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-01 00:15:14.635941 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-01 00:15:14.636036 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:14.636051 | orchestrator | 2026-01-01 00:15:14.636062 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-01 00:15:14.700311 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:14.700346 | orchestrator | 2026-01-01 00:15:14.700351 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-01 00:15:15.081780 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:15.081828 | orchestrator | 2026-01-01 00:15:15.081834 | 
orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-01 00:15:15.163304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-01 00:15:15.163336 | orchestrator | 2026-01-01 00:15:15.163341 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-01 00:15:16.302900 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:16.302970 | orchestrator | 2026-01-01 00:15:16.302980 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-01 00:15:17.130122 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:17.130245 | orchestrator | 2026-01-01 00:15:17.130262 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-01 00:15:28.473191 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:28.473301 | orchestrator | 2026-01-01 00:15:28.473318 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-01 00:15:28.520840 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:28.520914 | orchestrator | 2026-01-01 00:15:28.520929 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-01 00:15:28.520978 | orchestrator | 2026-01-01 00:15:28.520990 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:15:30.372654 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:30.372772 | orchestrator | 2026-01-01 00:15:30.372789 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-01-01 00:15:30.523911 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-01 00:15:30.524026 | orchestrator | 2026-01-01 00:15:30.524042 | orchestrator | TASK 
[osism.services.manager : Include install tasks] ************************** 2026-01-01 00:15:30.581430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-01 00:15:30.581532 | orchestrator | 2026-01-01 00:15:30.581546 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-01 00:15:33.337610 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:33.337768 | orchestrator | 2026-01-01 00:15:33.337786 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-01 00:15:33.396163 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:33.396278 | orchestrator | 2026-01-01 00:15:33.396294 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-01 00:15:33.528161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-01 00:15:33.528305 | orchestrator | 2026-01-01 00:15:33.528333 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-01 00:15:36.640279 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-01 00:15:36.640400 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-01 00:15:36.640416 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-01 00:15:36.640429 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-01 00:15:36.640441 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-01 00:15:36.640454 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-01 00:15:36.640465 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-01 00:15:36.640477 | orchestrator | changed: [testbed-manager] 
=> (item=/opt/state) 2026-01-01 00:15:36.640488 | orchestrator | 2026-01-01 00:15:36.640501 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-01 00:15:37.321191 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:37.321354 | orchestrator | 2026-01-01 00:15:37.321375 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-01 00:15:37.998349 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:37.998453 | orchestrator | 2026-01-01 00:15:37.998470 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-01 00:15:38.074467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-01 00:15:38.074566 | orchestrator | 2026-01-01 00:15:38.074580 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-01 00:15:39.328909 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-01 00:15:39.329034 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-01 00:15:39.329093 | orchestrator | 2026-01-01 00:15:39.329108 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-01 00:15:39.991110 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:39.991219 | orchestrator | 2026-01-01 00:15:39.991235 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-01 00:15:40.050191 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:40.050293 | orchestrator | 2026-01-01 00:15:40.050310 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-01 00:15:40.137811 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-01 00:15:40.137908 | orchestrator | 2026-01-01 00:15:40.137923 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-01 00:15:40.813602 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:40.813718 | orchestrator | 2026-01-01 00:15:40.813775 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-01 00:15:40.872118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-01 00:15:40.872222 | orchestrator | 2026-01-01 00:15:40.872237 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-01 00:15:42.323436 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-01 00:15:42.323578 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-01 00:15:42.323594 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:42.323608 | orchestrator | 2026-01-01 00:15:42.324381 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-01 00:15:42.995330 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:42.995449 | orchestrator | 2026-01-01 00:15:42.995465 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-01 00:15:43.053553 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:43.053684 | orchestrator | 2026-01-01 00:15:43.053707 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-01 00:15:43.146893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-01 00:15:43.147014 | orchestrator | 
2026-01-01 00:15:43.147113 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-01 00:15:43.720379 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:43.720521 | orchestrator | 2026-01-01 00:15:43.720547 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-01 00:15:44.158767 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:44.158897 | orchestrator | 2026-01-01 00:15:44.158922 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-01 00:15:45.415115 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-01 00:15:45.415236 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-01-01 00:15:45.415272 | orchestrator | 2026-01-01 00:15:45.415286 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-01 00:15:46.090860 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:46.090971 | orchestrator | 2026-01-01 00:15:46.090988 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-01 00:15:46.504265 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:46.504408 | orchestrator | 2026-01-01 00:15:46.504426 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-01 00:15:46.884488 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:46.884586 | orchestrator | 2026-01-01 00:15:46.884602 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-01 00:15:46.932003 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:46.932154 | orchestrator | 2026-01-01 00:15:46.932171 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-01 00:15:47.010634 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-01 00:15:47.010763 | orchestrator | 2026-01-01 00:15:47.010780 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-01 00:15:47.071754 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:47.071856 | orchestrator | 2026-01-01 00:15:47.071872 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-01 00:15:49.200390 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-01 00:15:49.200527 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-01 00:15:49.200545 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-01-01 00:15:49.200557 | orchestrator | 2026-01-01 00:15:49.200570 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-01 00:15:49.926126 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:49.926201 | orchestrator | 2026-01-01 00:15:49.926210 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-01 00:15:50.701789 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:50.701920 | orchestrator | 2026-01-01 00:15:50.701941 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-01 00:15:51.441364 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:51.441509 | orchestrator | 2026-01-01 00:15:51.441536 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-01 00:15:51.515348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-01 00:15:51.515463 | orchestrator | 2026-01-01 00:15:51.515479 | orchestrator | TASK 
[osism.services.manager : Include scripts vars file] ********************** 2026-01-01 00:15:51.562379 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:51.562482 | orchestrator | 2026-01-01 00:15:51.562499 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-01 00:15:52.331126 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-01 00:15:52.331236 | orchestrator | 2026-01-01 00:15:52.331254 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-01 00:15:52.413203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-01 00:15:52.413296 | orchestrator | 2026-01-01 00:15:52.413305 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-01 00:15:53.147930 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:53.148085 | orchestrator | 2026-01-01 00:15:53.148101 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-01 00:15:53.750911 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:53.751007 | orchestrator | 2026-01-01 00:15:53.751023 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-01 00:15:53.804351 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:15:53.804442 | orchestrator | 2026-01-01 00:15:53.804460 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-01 00:15:53.868858 | orchestrator | ok: [testbed-manager] 2026-01-01 00:15:53.868971 | orchestrator | 2026-01-01 00:15:53.868990 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-01 00:15:54.730558 | orchestrator | changed: [testbed-manager] 2026-01-01 00:15:54.730679 | orchestrator | 2026-01-01 
00:15:54.730696 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-01 00:17:05.232644 | orchestrator | changed: [testbed-manager] 2026-01-01 00:17:05.232786 | orchestrator | 2026-01-01 00:17:05.232803 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-01 00:17:06.265219 | orchestrator | ok: [testbed-manager] 2026-01-01 00:17:06.265343 | orchestrator | 2026-01-01 00:17:06.265360 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-01 00:17:06.322376 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:17:06.322490 | orchestrator | 2026-01-01 00:17:06.322506 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-01-01 00:17:09.338558 | orchestrator | changed: [testbed-manager] 2026-01-01 00:17:09.338677 | orchestrator | 2026-01-01 00:17:09.338731 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-01 00:17:09.384711 | orchestrator | ok: [testbed-manager] 2026-01-01 00:17:09.384819 | orchestrator | 2026-01-01 00:17:09.384834 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-01 00:17:09.384847 | orchestrator | 2026-01-01 00:17:09.384858 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-01 00:17:09.432841 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:17:09.432944 | orchestrator | 2026-01-01 00:17:09.432959 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-01 00:18:09.490299 | orchestrator | Pausing for 60 seconds 2026-01-01 00:18:09.490448 | orchestrator | changed: [testbed-manager] 2026-01-01 00:18:09.490466 | orchestrator | 2026-01-01 00:18:09.490480 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure 
that all containers are up] *** 2026-01-01 00:18:12.654623 | orchestrator | changed: [testbed-manager] 2026-01-01 00:18:12.654731 | orchestrator | 2026-01-01 00:18:12.654739 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-01 00:19:14.738766 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-01 00:19:14.738927 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-01-01 00:19:14.738983 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-01-01 00:19:14.738996 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:14.739009 | orchestrator | 2026-01-01 00:19:14.739020 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-01 00:19:25.570638 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:25.570775 | orchestrator | 2026-01-01 00:19:25.570792 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-01 00:19:25.655890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-01 00:19:25.656075 | orchestrator | 2026-01-01 00:19:25.656093 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-01 00:19:25.656107 | orchestrator | 2026-01-01 00:19:25.656119 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-01 00:19:25.694075 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:25.694182 | orchestrator | 2026-01-01 00:19:25.694196 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-01 00:19:25.759729 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-01 00:19:25.759839 | orchestrator | 2026-01-01 00:19:25.759854 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-01 00:19:26.424440 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:26.424566 | orchestrator | 2026-01-01 00:19:26.424583 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-01 00:19:29.653552 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:29.653694 | orchestrator | 2026-01-01 00:19:29.653712 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-01-01 00:19:29.734675 | orchestrator | ok: [testbed-manager] => { 2026-01-01 00:19:29.734793 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-01 00:19:29.734811 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-01 00:19:29.734824 | orchestrator | "Checking running containers against expected versions...", 2026-01-01 00:19:29.734836 | orchestrator | "", 2026-01-01 00:19:29.734848 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-01 00:19:29.734860 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-01 00:19:29.734871 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.734882 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-01-01 00:19:29.734894 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.734905 | orchestrator | "", 2026-01-01 00:19:29.734916 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-01 00:19:29.735004 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-01-01 00:19:29.735020 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735031 | orchestrator | " Running: 
registry.osism.tech/osism/osism-ansible:latest", 2026-01-01 00:19:29.735042 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735053 | orchestrator | "", 2026-01-01 00:19:29.735065 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-01 00:19:29.735076 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-01 00:19:29.735087 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735097 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-01-01 00:19:29.735109 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735120 | orchestrator | "", 2026-01-01 00:19:29.735131 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-01 00:19:29.735143 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-01 00:19:29.735154 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735165 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-01-01 00:19:29.735205 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735219 | orchestrator | "", 2026-01-01 00:19:29.735232 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-01 00:19:29.735246 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-01 00:19:29.735258 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735271 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-01-01 00:19:29.735283 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735295 | orchestrator | "", 2026-01-01 00:19:29.735308 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-01 00:19:29.735321 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.735333 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735346 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.735358 | 
orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735371 | orchestrator | "", 2026-01-01 00:19:29.735384 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-01 00:19:29.735398 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-01 00:19:29.735411 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735423 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-01 00:19:29.735437 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735450 | orchestrator | "", 2026-01-01 00:19:29.735462 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-01-01 00:19:29.735473 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-01 00:19:29.735484 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735504 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-01 00:19:29.735521 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735533 | orchestrator | "", 2026-01-01 00:19:29.735544 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-01 00:19:29.735555 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-01-01 00:19:29.735566 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735577 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-01-01 00:19:29.735588 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735599 | orchestrator | "", 2026-01-01 00:19:29.735610 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-01 00:19:29.735621 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-01 00:19:29.735632 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735643 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-01 00:19:29.735654 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735665 | orchestrator | "", 
2026-01-01 00:19:29.735676 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-01 00:19:29.735687 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.735698 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735708 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.735719 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735730 | orchestrator | "", 2026-01-01 00:19:29.735741 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-01 00:19:29.735752 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.735763 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735774 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.735785 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735796 | orchestrator | "", 2026-01-01 00:19:29.735807 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-01 00:19:29.735818 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.735828 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735839 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.735850 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735861 | orchestrator | "", 2026-01-01 00:19:29.735872 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-01 00:19:29.735891 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.735903 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.735913 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.735925 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.735967 | orchestrator | "", 2026-01-01 00:19:29.735978 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-01-01 00:19:29.736011 | orchestrator | " 
Expected: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.736023 | orchestrator | " Enabled: true", 2026-01-01 00:19:29.736034 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-01-01 00:19:29.736045 | orchestrator | " Status: ✅ MATCH", 2026-01-01 00:19:29.736056 | orchestrator | "", 2026-01-01 00:19:29.736067 | orchestrator | "=== Summary ===", 2026-01-01 00:19:29.736078 | orchestrator | "Errors (version mismatches): 0", 2026-01-01 00:19:29.736089 | orchestrator | "Warnings (expected containers not running): 0", 2026-01-01 00:19:29.736100 | orchestrator | "", 2026-01-01 00:19:29.736111 | orchestrator | "✅ All running containers match expected versions!" 2026-01-01 00:19:29.736122 | orchestrator | ] 2026-01-01 00:19:29.736134 | orchestrator | } 2026-01-01 00:19:29.736146 | orchestrator | 2026-01-01 00:19:29.736157 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-01 00:19:29.803458 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:29.803573 | orchestrator | 2026-01-01 00:19:29.803589 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:19:29.803603 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-01 00:19:29.803614 | orchestrator | 2026-01-01 00:19:29.917823 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-01 00:19:29.917997 | orchestrator | + deactivate 2026-01-01 00:19:29.918080 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-01 00:19:29.918096 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-01 00:19:29.918108 | orchestrator | + export PATH 2026-01-01 00:19:29.918119 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-01 00:19:29.918240 | orchestrator | + '[' 
-n '' ']' 2026-01-01 00:19:29.918254 | orchestrator | + hash -r 2026-01-01 00:19:29.918266 | orchestrator | + '[' -n '' ']' 2026-01-01 00:19:29.918277 | orchestrator | + unset VIRTUAL_ENV 2026-01-01 00:19:29.918303 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-01 00:19:29.918314 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-01-01 00:19:29.918325 | orchestrator | + unset -f deactivate 2026-01-01 00:19:29.918337 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-01 00:19:29.928721 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-01 00:19:29.928776 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-01 00:19:29.928789 | orchestrator | + local max_attempts=60 2026-01-01 00:19:29.928802 | orchestrator | + local name=ceph-ansible 2026-01-01 00:19:29.928813 | orchestrator | + local attempt_num=1 2026-01-01 00:19:29.929684 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:19:29.968675 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:19:29.968734 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-01 00:19:29.968749 | orchestrator | + local max_attempts=60 2026-01-01 00:19:29.968761 | orchestrator | + local name=kolla-ansible 2026-01-01 00:19:29.968772 | orchestrator | + local attempt_num=1 2026-01-01 00:19:29.969859 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-01 00:19:30.011072 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:19:30.011127 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-01 00:19:30.011141 | orchestrator | + local max_attempts=60 2026-01-01 00:19:30.011153 | orchestrator | + local name=osism-ansible 2026-01-01 00:19:30.011164 | orchestrator | + local attempt_num=1 2026-01-01 00:19:30.011902 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 
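The xtrace above shows a `wait_for_container_healthy` helper being called for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`, each time polling `docker inspect -f '{{.State.Health.Status}}'`. A minimal sketch of such a loop, reconstructed from the trace: the function name, its locals (`max_attempts`, `name`, `attempt_num`), and the inspect format string appear in the log, while the retry interval and the failure message are assumptions.

```shell
#!/usr/bin/env bash
# Sketch of the health-wait loop traced above. Only the function/variable
# names and the `docker inspect` format string come from the log; the
# 5-second sleep and the error message are illustrative assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    # Poll the container's health status until Docker reports "healthy".
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container ${name} did not become healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5  # assumed interval; not visible in the trace
    done
}
```

In the log all three containers are already healthy, so each call returns after a single `docker inspect`.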
2026-01-01 00:19:30.047914 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:19:30.048047 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-01 00:19:30.048064 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-01 00:19:30.756505 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-01-01 00:19:30.947548 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-01 00:19:30.947675 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-01 00:19:30.947691 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-01 00:19:30.947704 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-01 00:19:30.947719 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-01 00:19:30.947732 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-01 00:19:30.947743 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-01 00:19:30.947755 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-01 00:19:30.947795 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-01 00:19:30.947807 | orchestrator | manager-mariadb-1 
registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-01 00:19:30.947818 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-01-01 00:19:30.947829 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-01 00:19:30.947840 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-01 00:19:30.948001 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-01 00:19:30.948017 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-01 00:19:30.948028 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-01 00:19:30.954102 | orchestrator | ++ semver latest 7.0.0 2026-01-01 00:19:31.022546 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-01 00:19:31.022652 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-01 00:19:31.022668 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-01 00:19:31.027588 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-01 00:19:43.361380 | orchestrator | 2026-01-01 00:19:43 | INFO  | Task 146ea461-f6f8-412e-bb88-ee118ff9d316 (resolvconf) was prepared for execution. 
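The `++ semver latest 7.0.0` / `+ [[ -1 -ge 0 ]]` / `+ [[ latest == \l\a\t\e\s\t ]]` sequence in the trace is a version gate: a semver comparison that returns -1 for the non-release tag `latest`, followed by a fallback that treats `latest` as new enough. A sketch of that pattern, with the hypothetical wrapper name `version_at_least`; `semver` stands in for a CLI that prints `-1`/`0`/`1` like the one invoked in the log.

```shell
#!/usr/bin/env bash
# Sketch of the version gate seen in the xtrace: accept either a semver
# comparison result >= 0 or the rolling tag "latest". The wrapper name is
# hypothetical; `semver` mimics the CLI called in the log, which prints
# -1, 0, or 1 for less-than, equal, and greater-than.
version_at_least() {
    local current="$1" minimum="$2"
    if [ "$(semver "$current" "$minimum" 2>/dev/null)" -ge 0 ] 2>/dev/null; then
        return 0
    fi
    # Fallback for non-semver tags, as in the trace: [[ latest == latest ]]
    [ "$current" = "latest" ]
}
```

In the log, `semver latest 7.0.0` prints `-1`, so only the `latest` fallback lets the run proceed.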
2026-01-01 00:19:43.361573 | orchestrator | 2026-01-01 00:19:43 | INFO  | It takes a moment until task 146ea461-f6f8-412e-bb88-ee118ff9d316 (resolvconf) has been started and output is visible here. 2026-01-01 00:19:58.071387 | orchestrator | 2026-01-01 00:19:58.071495 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-01 00:19:58.071514 | orchestrator | 2026-01-01 00:19:58.071527 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:19:58.071539 | orchestrator | Thursday 01 January 2026 00:19:47 +0000 (0:00:00.147) 0:00:00.147 ****** 2026-01-01 00:19:58.071550 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:58.071563 | orchestrator | 2026-01-01 00:19:58.071574 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-01 00:19:58.071586 | orchestrator | Thursday 01 January 2026 00:19:51 +0000 (0:00:03.932) 0:00:04.079 ****** 2026-01-01 00:19:58.071597 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:58.071609 | orchestrator | 2026-01-01 00:19:58.071620 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-01 00:19:58.071631 | orchestrator | Thursday 01 January 2026 00:19:51 +0000 (0:00:00.076) 0:00:04.156 ****** 2026-01-01 00:19:58.071642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-01 00:19:58.071654 | orchestrator | 2026-01-01 00:19:58.071666 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-01 00:19:58.071677 | orchestrator | Thursday 01 January 2026 00:19:51 +0000 (0:00:00.098) 0:00:04.255 ****** 2026-01-01 00:19:58.071688 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-01 00:19:58.071699 | orchestrator | 2026-01-01 00:19:58.071710 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-01 00:19:58.071731 | orchestrator | Thursday 01 January 2026 00:19:51 +0000 (0:00:00.076) 0:00:04.332 ****** 2026-01-01 00:19:58.071744 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:58.071755 | orchestrator | 2026-01-01 00:19:58.071766 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-01 00:19:58.071777 | orchestrator | Thursday 01 January 2026 00:19:53 +0000 (0:00:01.208) 0:00:05.541 ****** 2026-01-01 00:19:58.071788 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:58.071799 | orchestrator | 2026-01-01 00:19:58.071810 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-01 00:19:58.071821 | orchestrator | Thursday 01 January 2026 00:19:53 +0000 (0:00:00.065) 0:00:05.606 ****** 2026-01-01 00:19:58.071832 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:58.071843 | orchestrator | 2026-01-01 00:19:58.071854 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-01 00:19:58.071865 | orchestrator | Thursday 01 January 2026 00:19:53 +0000 (0:00:00.557) 0:00:06.164 ****** 2026-01-01 00:19:58.071876 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:19:58.071887 | orchestrator | 2026-01-01 00:19:58.071898 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-01 00:19:58.071910 | orchestrator | Thursday 01 January 2026 00:19:53 +0000 (0:00:00.082) 0:00:06.246 ****** 2026-01-01 00:19:58.071951 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:58.071963 | orchestrator | 2026-01-01 
00:19:58.071974 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-01 00:19:58.071985 | orchestrator | Thursday 01 January 2026 00:19:54 +0000 (0:00:00.558) 0:00:06.804 ****** 2026-01-01 00:19:58.071996 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:58.072007 | orchestrator | 2026-01-01 00:19:58.072018 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-01 00:19:58.072029 | orchestrator | Thursday 01 January 2026 00:19:55 +0000 (0:00:01.186) 0:00:07.991 ****** 2026-01-01 00:19:58.072062 | orchestrator | ok: [testbed-manager] 2026-01-01 00:19:58.072074 | orchestrator | 2026-01-01 00:19:58.072085 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-01 00:19:58.072096 | orchestrator | Thursday 01 January 2026 00:19:56 +0000 (0:00:01.027) 0:00:09.019 ****** 2026-01-01 00:19:58.072107 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-01 00:19:58.072119 | orchestrator | 2026-01-01 00:19:58.072130 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-01 00:19:58.072141 | orchestrator | Thursday 01 January 2026 00:19:56 +0000 (0:00:00.076) 0:00:09.096 ****** 2026-01-01 00:19:58.072152 | orchestrator | changed: [testbed-manager] 2026-01-01 00:19:58.072164 | orchestrator | 2026-01-01 00:19:58.072175 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:19:58.072187 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-01 00:19:58.072198 | orchestrator | 2026-01-01 00:19:58.072209 | orchestrator | 2026-01-01 00:19:58.072220 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-01 00:19:58.072231 | orchestrator | Thursday 01 January 2026 00:19:57 +0000 (0:00:01.189) 0:00:10.285 ****** 2026-01-01 00:19:58.072242 | orchestrator | =============================================================================== 2026-01-01 00:19:58.072253 | orchestrator | Gathering Facts --------------------------------------------------------- 3.93s 2026-01-01 00:19:58.072264 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.21s 2026-01-01 00:19:58.072275 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s 2026-01-01 00:19:58.072286 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.19s 2026-01-01 00:19:58.072297 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.03s 2026-01-01 00:19:58.072308 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2026-01-01 00:19:58.072336 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s 2026-01-01 00:19:58.072349 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s 2026-01-01 00:19:58.072360 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-01-01 00:19:58.072371 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-01-01 00:19:58.072382 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-01-01 00:19:58.072393 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s 2026-01-01 00:19:58.072404 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-01-01 00:19:58.417342 | 
orchestrator | + osism apply sshconfig 2026-01-01 00:20:10.662821 | orchestrator | 2026-01-01 00:20:10 | INFO  | Task dd69c60a-de65-41c1-8e67-065c5775ad88 (sshconfig) was prepared for execution. 2026-01-01 00:20:10.662967 | orchestrator | 2026-01-01 00:20:10 | INFO  | It takes a moment until task dd69c60a-de65-41c1-8e67-065c5775ad88 (sshconfig) has been started and output is visible here. 2026-01-01 00:20:23.002662 | orchestrator | 2026-01-01 00:20:23.002781 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-01 00:20:23.002794 | orchestrator | 2026-01-01 00:20:23.002802 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-01 00:20:23.002811 | orchestrator | Thursday 01 January 2026 00:20:15 +0000 (0:00:00.174) 0:00:00.174 ****** 2026-01-01 00:20:23.002820 | orchestrator | ok: [testbed-manager] 2026-01-01 00:20:23.002841 | orchestrator | 2026-01-01 00:20:23.002850 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-01 00:20:23.002858 | orchestrator | Thursday 01 January 2026 00:20:15 +0000 (0:00:00.553) 0:00:00.728 ****** 2026-01-01 00:20:23.002892 | orchestrator | changed: [testbed-manager] 2026-01-01 00:20:23.002901 | orchestrator | 2026-01-01 00:20:23.002909 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-01 00:20:23.002974 | orchestrator | Thursday 01 January 2026 00:20:16 +0000 (0:00:00.547) 0:00:01.276 ****** 2026-01-01 00:20:23.002986 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-01 00:20:23.002999 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-01 00:20:23.003009 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-01 00:20:23.003017 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-01 00:20:23.003024 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-3) 2026-01-01 00:20:23.003032 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-01 00:20:23.003039 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-01-01 00:20:23.003047 | orchestrator | 2026-01-01 00:20:23.003054 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-01 00:20:23.003062 | orchestrator | Thursday 01 January 2026 00:20:22 +0000 (0:00:05.859) 0:00:07.136 ****** 2026-01-01 00:20:23.003069 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:20:23.003077 | orchestrator | 2026-01-01 00:20:23.003084 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-01 00:20:23.003091 | orchestrator | Thursday 01 January 2026 00:20:22 +0000 (0:00:00.089) 0:00:07.225 ****** 2026-01-01 00:20:23.003099 | orchestrator | changed: [testbed-manager] 2026-01-01 00:20:23.003107 | orchestrator | 2026-01-01 00:20:23.003114 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:20:23.003123 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:20:23.003131 | orchestrator | 2026-01-01 00:20:23.003139 | orchestrator | 2026-01-01 00:20:23.003146 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:20:23.003154 | orchestrator | Thursday 01 January 2026 00:20:22 +0000 (0:00:00.576) 0:00:07.802 ****** 2026-01-01 00:20:23.003161 | orchestrator | =============================================================================== 2026-01-01 00:20:23.003169 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.86s 2026-01-01 00:20:23.003177 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2026-01-01 00:20:23.003184 | orchestrator | 
osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s 2026-01-01 00:20:23.003192 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.55s 2026-01-01 00:20:23.003201 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-01-01 00:20:23.332508 | orchestrator | + osism apply known-hosts 2026-01-01 00:20:35.520200 | orchestrator | 2026-01-01 00:20:35 | INFO  | Task 393c9114-aa5a-45e4-bc78-745d9ec2ae32 (known-hosts) was prepared for execution. 2026-01-01 00:20:35.520380 | orchestrator | 2026-01-01 00:20:35 | INFO  | It takes a moment until task 393c9114-aa5a-45e4-bc78-745d9ec2ae32 (known-hosts) has been started and output is visible here. 2026-01-01 00:20:52.950345 | orchestrator | 2026-01-01 00:20:52.950492 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-01 00:20:52.950526 | orchestrator | 2026-01-01 00:20:52.950549 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-01 00:20:52.950572 | orchestrator | Thursday 01 January 2026 00:20:39 +0000 (0:00:00.165) 0:00:00.165 ****** 2026-01-01 00:20:52.950591 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-01 00:20:52.950613 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-01 00:20:52.950634 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-01 00:20:52.950653 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-01 00:20:52.950702 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-01 00:20:52.950723 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-01 00:20:52.950743 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-01 00:20:52.950761 | orchestrator | 2026-01-01 00:20:52.950781 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts 
entries for all hosts with hostname] *** 2026-01-01 00:20:52.950803 | orchestrator | Thursday 01 January 2026 00:20:45 +0000 (0:00:06.221) 0:00:06.386 ****** 2026-01-01 00:20:52.950826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-01 00:20:52.950848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-01 00:20:52.950883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-01 00:20:52.950934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-01 00:20:52.950956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-01 00:20:52.950976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-01 00:20:52.950995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-01 00:20:52.951014 | orchestrator | 2026-01-01 00:20:52.951032 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:20:52.951051 | orchestrator | Thursday 01 January 2026 00:20:46 +0000 
(0:00:00.167) 0:00:06.554 ****** 2026-01-01 00:20:52.951071 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCsrrK9XUIWhj264YEMvtZ22+ocW0nEklVUHLB3lr8pW9ohrMLh6QCcQwYuaJCDfClSciTLCrKY3eso7GPTSSwY=) 2026-01-01 00:20:52.951096 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtdP7ag+CMEyK+2bYGz5RjwasmHfXQZ5gdtLAwTHbPSRk1aT+kaR8DlgG+1IT7j2Tngm5bckRp29ejXPPEZYyaxh13XaL2uj68/iywgQXomfH0DIfEfiZbPeYcYFvOTgqu//lDctQoPYFiVMSuozoccmLqU2b4o5lCj+vUH1x4cuztGmJXJE+NvjOAf+2dBhUecncVLChq7Vf3sR9FH+h8lYhyYX5MGV4Ku3HZNe1DKIf1fa7lz56sBTZKi8t7oJpuVXplO7qLhCfniO8he64aYjYHwUJ7GoVbPXQwr79cIcbnWnGVUzz5VBTQdufDgGLqqK9T8g/5ZEo3g3BDs2BbmHXJKRYFDIt3GHIhc/97Gwvmyne+fkKfaJkaunbQSJjk0pq8Dy4UMZ0a8Scw12ztCbT54wLouROHXDaYomKRGlURPMVdoFUoX1NwCSaPHcwqMQnKrafZb4PpJorSj7kU0tIYlyRzrlNGWFsswQiyg1L//2mgjkpN/x15+xFvyFc=) 2026-01-01 00:20:52.951126 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEqsOKN/z3M8CogBJf9AKOejFzYhqJrg7FJ5Ld4WhQRm) 2026-01-01 00:20:52.951147 | orchestrator | 2026-01-01 00:20:52.951165 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:20:52.951184 | orchestrator | Thursday 01 January 2026 00:20:47 +0000 (0:00:01.204) 0:00:07.759 ****** 2026-01-01 00:20:52.951203 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLvLSsFD3qTabXCPAS2eYSCyKi3SVlYHvyzjfiqNWY4JZXb3b6ldgTcq5KM/7+o6rQNK3loaZNNRlTyhrZgSoPU=) 2026-01-01 00:20:52.951262 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCnhQu+c0Y3n6ypot/WlPWj9Bw1B4wKSSCdbO7E0H2tAeAXqlDTPWoJBkA7eFPq97ZyzbpqfA49iXgdf7DhGAp8lhsFBZGPRZjnFeU8VkqQxKk/tE1eyYVe6pnCVJb2kGvRawUvzlKaKDcwbnsXOEB3dF2xuEZwLUxvjb2PwheRSgc4ymi5YO3bx/HANlVEdYc29+7C3WQ972Vhbmm3tQV7i+lxzgngHwus6tgH+Nf1lnXsr/zx+41G/t/7DkNDCC4PHcfpF7m25ibndHW7FqJ+E3QYMnlMr6ynDgEP09hKoZSXg/IqjI/cqMGIl2IhkegB2TfKHI4MwWwFFVwC2R9KoYyP5KNXvq4ZGAGqpPk5WwiRRixp+r7RP8GHo8oviWWIEP6iX5+u05kab8EZOahYp1XDrihWZgX1oWTHmy+fS/GtyZm6/UgxoNt4equw0OsCuDGRPb2/SFF4tHmpTuuJeM52IalJmMdwsweN8qea8FeoB44112A7yalccNK7c/M=) 2026-01-01 00:20:52.951299 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIChl3pbY/rOl9RZ/9lx/SpsUE4WGC4YdaTBroVm7rlTl) 2026-01-01 00:20:52.951319 | orchestrator | 2026-01-01 00:20:52.951339 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:20:52.951359 | orchestrator | Thursday 01 January 2026 00:20:48 +0000 (0:00:01.165) 0:00:08.924 ****** 2026-01-01 00:20:52.951379 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINGJQFQKNhbkhdzekVzeKckeAjkRtpUm4IvlrehXlWYa) 2026-01-01 00:20:52.951400 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaFjDQX3FjfF4yOuA3wwxGPHZsOC0T2XeIt4Tpd4XY3whSpbkbHGC1krN41KFwFysOWayqAcdF7uOGQR/XJltNQLMOovoYAwatOuGSbd7VqCAoVcq11pr2r3pmUPv+N4p6E0iblKndu4b78KWKjdcDnuXw+h/ai9SYekhuyeNd6v/CVs89P2dayB072scbVtcAbWWc0n+aXEWNxTd28Faqqm3Oay/kIbJUR/F/3cF10KWnB8Htclw2JIhNC0MF4PVZux1PyFpjeEnF1Ars0Cyzz1II7eMXO037LqHmRr+VsbrcW/RotGX0tzdxBnibI3KavkQZZEkSH8H9jEpzt3ri1Y0Mw/WUYgI91QpCFZBbCP9lwuVkDWyardzqJ/J/aCoIcLGmzs6DkVtF1IJnGqPA+zOocryOuBAwFggmACZRo7s8MKnUI7JApNYNXtCjv70qXbeD/4u6R2+IfTIJEPtkfckmtu52QyPyVtfIoX7Da53zxh7LHTGS794bnVGT0Ec=) 2026-01-01 00:20:52.951420 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIkDAnI0T798VvzO+fO8Zv3z1zBzBic/NQHukIv8yL/CPXyZ24etsPVYj4w5kQIwU3D2bpFYHKWOKPwJ2oDNBYo=) 2026-01-01 00:20:52.951438 | orchestrator | 2026-01-01 00:20:52.951456 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:20:52.951473 | orchestrator | Thursday 01 January 2026 00:20:49 +0000 (0:00:01.077) 0:00:10.002 ****** 2026-01-01 00:20:52.951582 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuSEoLluOJQzpkGlfHcACYhP35XaAX3Eu12y2aUoMErXI22ITH7NALduxqpESCsig7QOpvvM373Z5+Fja0Brl1Cbh+RBp1vQON80HKahn010IEbPBbIuMv+2BXwocIXG4K96Btzm10LwIek1iTDU3d0Yxv9lyBLhKjPrmlKRskHQmc/2stw5gzBiSRQK2LwCZGvKc4h+o5mBHzXRaTfuFLRStLmVw0UyyDQIVXoXlXNXEsi17rNLtbD0ZbF32WSetWwPa7CdXofMU0eV6Cmic6DSW2QhLblmPCrb9CWl9xdvs1yhshPUIp+Vu2SFKS5Nxea9KdCgdkKaZprfJsvKY1rWdeZ4UDfrTS56s30Io//Vrrj6w3qunuXz5fh0xVmIZY8dUpQDe6p2FMtPvKF4FmYIW38/F4kvIhpPRYCyTHbPqCVdWk5R2KHTjI8sWeGKpD+reEbaDrTto+XWF5+X8x5SlkL5LdeNParZoZDK1Ta4xlAbqnEoXyQ/blXUSUWL0=) 2026-01-01 00:20:52.951604 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPqnHhTs5aKZ9KEhRVsxC9u03+hTfn/N+bN47AZcHWjZ3AyYw6ixg6SRuekVe+jODf64oyY0kSR5fP1hmIt5rn4=) 2026-01-01 00:20:52.951624 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINBxldjbFQcjXEJ/E077d1HkgB2PB18zV0M+9advmK2s) 2026-01-01 00:20:52.951643 | orchestrator | 2026-01-01 00:20:52.951658 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:20:52.951670 | orchestrator | Thursday 01 January 2026 00:20:50 +0000 (0:00:01.131) 0:00:11.133 ****** 2026-01-01 00:20:52.951681 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC5bAOfWAQZ2sXdBqIjvsZFFuPFOtJ1GZDz6KVBkUtw3A6SOAVGoQ7uKViaEEEiHy91jjuJyNgafacwI1i32OmWE7gv6W8keEdPhoxDfK5RSsbuQ608htWA1IcI+F1oXqjIL4zXsKq/KdwtU/39MJ0yYEl22v/XFbSprDJdvRxaflS1ButK1NiO3GZzfbP04cLBQdsUFpD5oo6me31NOT3Vu0HMGtUvK/2L875KM0j6nRBBu8my7pZwswatqqDXWl4pCaYgEZ5fVPWsyfryEADX1Q/Si7xIWTHlbMOnQi/nbkWlpXiyvMvImyKQPSSE2IJNsh6EaxMAePN4pIzScdIoOwLScdD8wMWM8OUF+GHGgoK1rQjm7s7hFolWHQHyws/32Y4B7Qfmgde39n6HcULHT74dyPWTt8cAeiLsNuQS9fC06HbuOAPxjVI6HS7Xz3+8VTi0ZjmB+/JS1NdUkrXm+PzH5pvbBEL6yl+Jj9CpT+Kp90seBhL8GVRbCmG4C3U=) 2026-01-01 00:20:52.951702 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJgdNt4gKudym8XfmeUm+Yp5gATc0PSXfhqtHpfAIgTL8iXIW/vSnfJNLB62FyAWkfPAWZUq/SOxK2qdHWs6B0M=) 2026-01-01 00:20:52.951713 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/oQdmKbN7DZYIRrMbpJxqb08v4wUVUIfpqnODxiCff) 2026-01-01 00:20:52.951725 | orchestrator | 2026-01-01 00:20:52.951735 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:20:52.951746 | orchestrator | Thursday 01 January 2026 00:20:51 +0000 (0:00:01.140) 0:00:12.274 ****** 2026-01-01 00:20:52.951769 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKsMjv5t4yYz9MkE+aA7Vc32FhnMks07XpQ1uKcAXMRLcRpCLCN1iHFL+jyCYuBLJ7ixk8NM1qyIWxaqil3F0Uk=) 2026-01-01 00:21:04.321866 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCuGLdqJq4rSmyhW5nqhinA//aoiHUrzi/qCPBDPrKDN78lggcbf28kXcHb+Nsyr6Yf6acxoQ6fbMkku38/n4dcxPTaRv806ZJnL1HdB/53NCazdlonPGH+/Uk+IsFuvMRD+6YLICBFh8EfSRemOf0gHMPdRt/K3MaewC8ReQm36bX70Wgeet4a7BilJDTm6OgEPPbTGdf1BqJIrD2BPBUw+WNr7o7fxoSsLMKTbJou/Bf4KQxAp9gs+/nTza0NuXFaWcvc8fQHnNjd6/G1rzuxXwjMgJWDEs5dGp65exQVWnWIrxw/bJErInOYVvEH1TVxYzwSbKvEMwvDgQSxb026eMUyGlomyjC6owS6rOqXzocjiJGlJTr8uTyctXlY0Xr8smmV68K+w76i1+QrC3D0m3t+pSsBuLkg4xVvEp9hP6DcI9CMBa4KoKJnmubZM16jqS/iE/t3ZMfWGCX18wZXu/pZdO+dWNaEKYFA5q+CL1+2LgA+OL0iZ5A70qD8UI0=) 2026-01-01 00:21:04.322751 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKFij++7KOSiVAUxzpImSK44bKNW8bAle60Ke8fx9e89) 2026-01-01 00:21:04.322770 | orchestrator | 2026-01-01 00:21:04.322778 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:21:04.322785 | orchestrator | Thursday 01 January 2026 00:20:52 +0000 (0:00:01.068) 0:00:13.342 ****** 2026-01-01 00:21:04.322792 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5SOopgwbyBPdMH4zUfnrUqXGXT8LfsC71vM7oP1ahoz1C1nTKn7TBLnsyiQq0RBhGj7MfhWAqXOgFtiFUfUPjeef0wzSxRkVwGph5Ovd5PbuZdj9z22xCmF8767uWQqR1RPK9xuAGTnA73HEc4tBUTodw5iCCCZHiDOLc/EtFsZxr0ZQZXFWMZXWZoDxowDxpcFdwy17qxt53BCZkBZe09jcmrrvIuuFok3IhDkQX5HJ0vuY2krNg2/SadXP1vEXOQoCFET7JpfMNAyWdBH5Tm34kErDhaLH1+NYCiX90O4e9f62ieg8q7UUJYZvw0j7ewDa2G1HNjhC68uzykkjmniCC7V1XEyR537T1C8fEXnrYx3KUxl+olW3YFwalPlVivKhzzVY2oo5jCPlcku7SCAEnGtnniq7VyyiH2KukeRjZ4h7yxfnBYmqDToQ73/RUANdFaIE/vRahlV04xqsHc6OZ4L5Nq61K1z1TkWGrm4aSIjaD68xtSHUhEokfGU8=) 2026-01-01 00:21:04.322799 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI3ZnYpNrtFWaowdqioq2+o/HALoMSJSRZHht+NHaENrDiyMeu7r4JjhkvphQ4gsFwGTD5gZi5IjMLijPt33i2g=) 2026-01-01 00:21:04.322805 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDIYqpWHw8vIsuYD46YJA004ray+vX1fimVkbLgG/jxh) 2026-01-01 00:21:04.322810 | orchestrator | 2026-01-01 00:21:04.322816 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-01 00:21:04.322822 | orchestrator | Thursday 01 January 2026 00:20:54 +0000 (0:00:01.142) 0:00:14.485 ****** 2026-01-01 00:21:04.322829 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-01 00:21:04.322835 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-01 00:21:04.322840 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-01 00:21:04.322845 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-01 00:21:04.322850 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-01 00:21:04.322855 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-01 00:21:04.322860 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-01 00:21:04.322865 | orchestrator | 2026-01-01 00:21:04.322870 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-01 00:21:04.322896 | orchestrator | Thursday 01 January 2026 00:20:59 +0000 (0:00:05.584) 0:00:20.070 ****** 2026-01-01 00:21:04.322921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-01 00:21:04.322928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-01 00:21:04.322933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-4) 2026-01-01 00:21:04.322953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-01 00:21:04.322958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-01 00:21:04.322963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-01 00:21:04.322968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-01 00:21:04.322973 | orchestrator | 2026-01-01 00:21:04.322993 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:21:04.322998 | orchestrator | Thursday 01 January 2026 00:20:59 +0000 (0:00:00.191) 0:00:20.262 ****** 2026-01-01 00:21:04.323003 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEqsOKN/z3M8CogBJf9AKOejFzYhqJrg7FJ5Ld4WhQRm) 2026-01-01 00:21:04.323011 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtdP7ag+CMEyK+2bYGz5RjwasmHfXQZ5gdtLAwTHbPSRk1aT+kaR8DlgG+1IT7j2Tngm5bckRp29ejXPPEZYyaxh13XaL2uj68/iywgQXomfH0DIfEfiZbPeYcYFvOTgqu//lDctQoPYFiVMSuozoccmLqU2b4o5lCj+vUH1x4cuztGmJXJE+NvjOAf+2dBhUecncVLChq7Vf3sR9FH+h8lYhyYX5MGV4Ku3HZNe1DKIf1fa7lz56sBTZKi8t7oJpuVXplO7qLhCfniO8he64aYjYHwUJ7GoVbPXQwr79cIcbnWnGVUzz5VBTQdufDgGLqqK9T8g/5ZEo3g3BDs2BbmHXJKRYFDIt3GHIhc/97Gwvmyne+fkKfaJkaunbQSJjk0pq8Dy4UMZ0a8Scw12ztCbT54wLouROHXDaYomKRGlURPMVdoFUoX1NwCSaPHcwqMQnKrafZb4PpJorSj7kU0tIYlyRzrlNGWFsswQiyg1L//2mgjkpN/x15+xFvyFc=) 2026-01-01 00:21:04.323016 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCsrrK9XUIWhj264YEMvtZ22+ocW0nEklVUHLB3lr8pW9ohrMLh6QCcQwYuaJCDfClSciTLCrKY3eso7GPTSSwY=) 2026-01-01 00:21:04.323021 | orchestrator | 2026-01-01 00:21:04.323026 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:21:04.323031 | orchestrator | Thursday 01 January 2026 00:21:00 +0000 (0:00:01.133) 0:00:21.395 ****** 2026-01-01 00:21:04.323036 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnhQu+c0Y3n6ypot/WlPWj9Bw1B4wKSSCdbO7E0H2tAeAXqlDTPWoJBkA7eFPq97ZyzbpqfA49iXgdf7DhGAp8lhsFBZGPRZjnFeU8VkqQxKk/tE1eyYVe6pnCVJb2kGvRawUvzlKaKDcwbnsXOEB3dF2xuEZwLUxvjb2PwheRSgc4ymi5YO3bx/HANlVEdYc29+7C3WQ972Vhbmm3tQV7i+lxzgngHwus6tgH+Nf1lnXsr/zx+41G/t/7DkNDCC4PHcfpF7m25ibndHW7FqJ+E3QYMnlMr6ynDgEP09hKoZSXg/IqjI/cqMGIl2IhkegB2TfKHI4MwWwFFVwC2R9KoYyP5KNXvq4ZGAGqpPk5WwiRRixp+r7RP8GHo8oviWWIEP6iX5+u05kab8EZOahYp1XDrihWZgX1oWTHmy+fS/GtyZm6/UgxoNt4equw0OsCuDGRPb2/SFF4tHmpTuuJeM52IalJmMdwsweN8qea8FeoB44112A7yalccNK7c/M=) 2026-01-01 00:21:04.323041 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIChl3pbY/rOl9RZ/9lx/SpsUE4WGC4YdaTBroVm7rlTl) 2026-01-01 00:21:04.323054 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLvLSsFD3qTabXCPAS2eYSCyKi3SVlYHvyzjfiqNWY4JZXb3b6ldgTcq5KM/7+o6rQNK3loaZNNRlTyhrZgSoPU=) 2026-01-01 00:21:04.323059 | orchestrator | 2026-01-01 00:21:04.323064 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:21:04.323069 | orchestrator | Thursday 01 January 2026 00:21:02 +0000 (0:00:01.058) 0:00:22.454 ****** 2026-01-01 00:21:04.323074 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINGJQFQKNhbkhdzekVzeKckeAjkRtpUm4IvlrehXlWYa) 2026-01-01 00:21:04.323079 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaFjDQX3FjfF4yOuA3wwxGPHZsOC0T2XeIt4Tpd4XY3whSpbkbHGC1krN41KFwFysOWayqAcdF7uOGQR/XJltNQLMOovoYAwatOuGSbd7VqCAoVcq11pr2r3pmUPv+N4p6E0iblKndu4b78KWKjdcDnuXw+h/ai9SYekhuyeNd6v/CVs89P2dayB072scbVtcAbWWc0n+aXEWNxTd28Faqqm3Oay/kIbJUR/F/3cF10KWnB8Htclw2JIhNC0MF4PVZux1PyFpjeEnF1Ars0Cyzz1II7eMXO037LqHmRr+VsbrcW/RotGX0tzdxBnibI3KavkQZZEkSH8H9jEpzt3ri1Y0Mw/WUYgI91QpCFZBbCP9lwuVkDWyardzqJ/J/aCoIcLGmzs6DkVtF1IJnGqPA+zOocryOuBAwFggmACZRo7s8MKnUI7JApNYNXtCjv70qXbeD/4u6R2+IfTIJEPtkfckmtu52QyPyVtfIoX7Da53zxh7LHTGS794bnVGT0Ec=) 2026-01-01 00:21:04.323085 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIkDAnI0T798VvzO+fO8Zv3z1zBzBic/NQHukIv8yL/CPXyZ24etsPVYj4w5kQIwU3D2bpFYHKWOKPwJ2oDNBYo=) 2026-01-01 00:21:04.323090 | orchestrator | 2026-01-01 00:21:04.323095 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:21:04.323099 | orchestrator | Thursday 01 January 2026 00:21:03 +0000 (0:00:01.123) 0:00:23.577 ****** 2026-01-01 00:21:04.323108 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCuSEoLluOJQzpkGlfHcACYhP35XaAX3Eu12y2aUoMErXI22ITH7NALduxqpESCsig7QOpvvM373Z5+Fja0Brl1Cbh+RBp1vQON80HKahn010IEbPBbIuMv+2BXwocIXG4K96Btzm10LwIek1iTDU3d0Yxv9lyBLhKjPrmlKRskHQmc/2stw5gzBiSRQK2LwCZGvKc4h+o5mBHzXRaTfuFLRStLmVw0UyyDQIVXoXlXNXEsi17rNLtbD0ZbF32WSetWwPa7CdXofMU0eV6Cmic6DSW2QhLblmPCrb9CWl9xdvs1yhshPUIp+Vu2SFKS5Nxea9KdCgdkKaZprfJsvKY1rWdeZ4UDfrTS56s30Io//Vrrj6w3qunuXz5fh0xVmIZY8dUpQDe6p2FMtPvKF4FmYIW38/F4kvIhpPRYCyTHbPqCVdWk5R2KHTjI8sWeGKpD+reEbaDrTto+XWF5+X8x5SlkL5LdeNParZoZDK1Ta4xlAbqnEoXyQ/blXUSUWL0=) 2026-01-01 00:21:08.969127 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPqnHhTs5aKZ9KEhRVsxC9u03+hTfn/N+bN47AZcHWjZ3AyYw6ixg6SRuekVe+jODf64oyY0kSR5fP1hmIt5rn4=) 2026-01-01 00:21:08.969240 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINBxldjbFQcjXEJ/E077d1HkgB2PB18zV0M+9advmK2s) 2026-01-01 00:21:08.969252 | orchestrator | 2026-01-01 00:21:08.969261 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:21:08.969270 | orchestrator | Thursday 01 January 2026 00:21:04 +0000 (0:00:01.134) 0:00:24.711 ****** 2026-01-01 00:21:08.969278 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJgdNt4gKudym8XfmeUm+Yp5gATc0PSXfhqtHpfAIgTL8iXIW/vSnfJNLB62FyAWkfPAWZUq/SOxK2qdHWs6B0M=) 2026-01-01 00:21:08.969285 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/oQdmKbN7DZYIRrMbpJxqb08v4wUVUIfpqnODxiCff) 2026-01-01 00:21:08.969316 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC5bAOfWAQZ2sXdBqIjvsZFFuPFOtJ1GZDz6KVBkUtw3A6SOAVGoQ7uKViaEEEiHy91jjuJyNgafacwI1i32OmWE7gv6W8keEdPhoxDfK5RSsbuQ608htWA1IcI+F1oXqjIL4zXsKq/KdwtU/39MJ0yYEl22v/XFbSprDJdvRxaflS1ButK1NiO3GZzfbP04cLBQdsUFpD5oo6me31NOT3Vu0HMGtUvK/2L875KM0j6nRBBu8my7pZwswatqqDXWl4pCaYgEZ5fVPWsyfryEADX1Q/Si7xIWTHlbMOnQi/nbkWlpXiyvMvImyKQPSSE2IJNsh6EaxMAePN4pIzScdIoOwLScdD8wMWM8OUF+GHGgoK1rQjm7s7hFolWHQHyws/32Y4B7Qfmgde39n6HcULHT74dyPWTt8cAeiLsNuQS9fC06HbuOAPxjVI6HS7Xz3+8VTi0ZjmB+/JS1NdUkrXm+PzH5pvbBEL6yl+Jj9CpT+Kp90seBhL8GVRbCmG4C3U=) 2026-01-01 00:21:08.969351 | orchestrator | 2026-01-01 00:21:08.969363 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:21:08.969371 | orchestrator | Thursday 01 January 2026 00:21:05 +0000 (0:00:01.125) 0:00:25.836 ****** 2026-01-01 00:21:08.969378 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuGLdqJq4rSmyhW5nqhinA//aoiHUrzi/qCPBDPrKDN78lggcbf28kXcHb+Nsyr6Yf6acxoQ6fbMkku38/n4dcxPTaRv806ZJnL1HdB/53NCazdlonPGH+/Uk+IsFuvMRD+6YLICBFh8EfSRemOf0gHMPdRt/K3MaewC8ReQm36bX70Wgeet4a7BilJDTm6OgEPPbTGdf1BqJIrD2BPBUw+WNr7o7fxoSsLMKTbJou/Bf4KQxAp9gs+/nTza0NuXFaWcvc8fQHnNjd6/G1rzuxXwjMgJWDEs5dGp65exQVWnWIrxw/bJErInOYVvEH1TVxYzwSbKvEMwvDgQSxb026eMUyGlomyjC6owS6rOqXzocjiJGlJTr8uTyctXlY0Xr8smmV68K+w76i1+QrC3D0m3t+pSsBuLkg4xVvEp9hP6DcI9CMBa4KoKJnmubZM16jqS/iE/t3ZMfWGCX18wZXu/pZdO+dWNaEKYFA5q+CL1+2LgA+OL0iZ5A70qD8UI0=) 2026-01-01 00:21:08.969386 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKsMjv5t4yYz9MkE+aA7Vc32FhnMks07XpQ1uKcAXMRLcRpCLCN1iHFL+jyCYuBLJ7ixk8NM1qyIWxaqil3F0Uk=) 2026-01-01 00:21:08.969394 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKFij++7KOSiVAUxzpImSK44bKNW8bAle60Ke8fx9e89) 2026-01-01 00:21:08.969401 | orchestrator | 2026-01-01 00:21:08.969409 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-01 00:21:08.969416 | orchestrator | Thursday 01 January 2026 00:21:06 +0000 (0:00:01.158) 0:00:26.996 ****** 2026-01-01 00:21:08.969423 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDIYqpWHw8vIsuYD46YJA004ray+vX1fimVkbLgG/jxh) 2026-01-01 00:21:08.969431 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5SOopgwbyBPdMH4zUfnrUqXGXT8LfsC71vM7oP1ahoz1C1nTKn7TBLnsyiQq0RBhGj7MfhWAqXOgFtiFUfUPjeef0wzSxRkVwGph5Ovd5PbuZdj9z22xCmF8767uWQqR1RPK9xuAGTnA73HEc4tBUTodw5iCCCZHiDOLc/EtFsZxr0ZQZXFWMZXWZoDxowDxpcFdwy17qxt53BCZkBZe09jcmrrvIuuFok3IhDkQX5HJ0vuY2krNg2/SadXP1vEXOQoCFET7JpfMNAyWdBH5Tm34kErDhaLH1+NYCiX90O4e9f62ieg8q7UUJYZvw0j7ewDa2G1HNjhC68uzykkjmniCC7V1XEyR537T1C8fEXnrYx3KUxl+olW3YFwalPlVivKhzzVY2oo5jCPlcku7SCAEnGtnniq7VyyiH2KukeRjZ4h7yxfnBYmqDToQ73/RUANdFaIE/vRahlV04xqsHc6OZ4L5Nq61K1z1TkWGrm4aSIjaD68xtSHUhEokfGU8=) 2026-01-01 00:21:08.969439 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI3ZnYpNrtFWaowdqioq2+o/HALoMSJSRZHht+NHaENrDiyMeu7r4JjhkvphQ4gsFwGTD5gZi5IjMLijPt33i2g=) 2026-01-01 00:21:08.969446 | orchestrator | 2026-01-01 00:21:08.969454 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-01 00:21:08.969461 | orchestrator | Thursday 01 January 2026 00:21:07 +0000 (0:00:01.145) 0:00:28.141 ****** 2026-01-01 00:21:08.969469 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-01 00:21:08.969478 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-01 00:21:08.969501 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-01 00:21:08.969508 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-01 00:21:08.969515 | orchestrator 
| skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-01 00:21:08.969523 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-01 00:21:08.969530 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-01 00:21:08.969537 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:21:08.969545 | orchestrator | 2026-01-01 00:21:08.969552 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-01 00:21:08.969560 | orchestrator | Thursday 01 January 2026 00:21:07 +0000 (0:00:00.173) 0:00:28.314 ****** 2026-01-01 00:21:08.969567 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:21:08.969574 | orchestrator | 2026-01-01 00:21:08.969587 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-01 00:21:08.969595 | orchestrator | Thursday 01 January 2026 00:21:07 +0000 (0:00:00.056) 0:00:28.370 ****** 2026-01-01 00:21:08.969602 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:21:08.969609 | orchestrator | 2026-01-01 00:21:08.969616 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-01 00:21:08.969623 | orchestrator | Thursday 01 January 2026 00:21:08 +0000 (0:00:00.060) 0:00:28.431 ****** 2026-01-01 00:21:08.969631 | orchestrator | changed: [testbed-manager] 2026-01-01 00:21:08.969638 | orchestrator | 2026-01-01 00:21:08.969645 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:21:08.969653 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-01 00:21:08.969663 | orchestrator | 2026-01-01 00:21:08.969670 | orchestrator | 2026-01-01 00:21:08.969677 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:21:08.969684 | orchestrator | Thursday 01 January 2026 00:21:08 +0000 
(0:00:00.729) 0:00:29.160 ****** 2026-01-01 00:21:08.969692 | orchestrator | =============================================================================== 2026-01-01 00:21:08.969699 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.22s 2026-01-01 00:21:08.969706 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.58s 2026-01-01 00:21:08.969714 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-01-01 00:21:08.969722 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-01-01 00:21:08.969729 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-01-01 00:21:08.969736 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-01-01 00:21:08.969743 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-01 00:21:08.969751 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-01 00:21:08.969758 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-01 00:21:08.969765 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-01 00:21:08.969772 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-01 00:21:08.969780 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-01 00:21:08.969787 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-01-01 00:21:08.969794 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-01 00:21:08.969801 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts 
entries ----------- 1.07s 2026-01-01 00:21:08.969809 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-01 00:21:08.969816 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.73s 2026-01-01 00:21:08.969823 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-01-01 00:21:08.969831 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-01-01 00:21:08.969838 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-01-01 00:21:09.298558 | orchestrator | + osism apply squid 2026-01-01 00:21:21.628037 | orchestrator | 2026-01-01 00:21:21 | INFO  | Task e368925d-2da5-4f4b-ab27-24dae8b11a3e (squid) was prepared for execution. 2026-01-01 00:21:21.628138 | orchestrator | 2026-01-01 00:21:21 | INFO  | It takes a moment until task e368925d-2da5-4f4b-ab27-24dae8b11a3e (squid) has been started and output is visible here. 
2026-01-01 00:23:16.189795 | orchestrator | 2026-01-01 00:23:16.189989 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-01 00:23:16.190112 | orchestrator | 2026-01-01 00:23:16.190129 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-01 00:23:16.190142 | orchestrator | Thursday 01 January 2026 00:21:26 +0000 (0:00:00.178) 0:00:00.178 ****** 2026-01-01 00:23:16.190155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-01 00:23:16.190167 | orchestrator | 2026-01-01 00:23:16.190178 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-01 00:23:16.190189 | orchestrator | Thursday 01 January 2026 00:21:26 +0000 (0:00:00.083) 0:00:00.262 ****** 2026-01-01 00:23:16.190200 | orchestrator | ok: [testbed-manager] 2026-01-01 00:23:16.190213 | orchestrator | 2026-01-01 00:23:16.190246 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-01 00:23:16.190257 | orchestrator | Thursday 01 January 2026 00:21:27 +0000 (0:00:01.714) 0:00:01.976 ****** 2026-01-01 00:23:16.190269 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-01 00:23:16.190281 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-01 00:23:16.190292 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-01 00:23:16.190304 | orchestrator | 2026-01-01 00:23:16.190317 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-01 00:23:16.190329 | orchestrator | Thursday 01 January 2026 00:21:29 +0000 (0:00:01.188) 0:00:03.165 ****** 2026-01-01 00:23:16.190343 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-01 00:23:16.190355 | 
orchestrator | 2026-01-01 00:23:16.190368 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-01 00:23:16.190380 | orchestrator | Thursday 01 January 2026 00:21:30 +0000 (0:00:01.177) 0:00:04.343 ****** 2026-01-01 00:23:16.190393 | orchestrator | ok: [testbed-manager] 2026-01-01 00:23:16.190406 | orchestrator | 2026-01-01 00:23:16.190419 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-01 00:23:16.190432 | orchestrator | Thursday 01 January 2026 00:21:30 +0000 (0:00:00.385) 0:00:04.729 ****** 2026-01-01 00:23:16.190444 | orchestrator | changed: [testbed-manager] 2026-01-01 00:23:16.190457 | orchestrator | 2026-01-01 00:23:16.190469 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-01 00:23:16.190483 | orchestrator | Thursday 01 January 2026 00:21:31 +0000 (0:00:00.973) 0:00:05.703 ****** 2026-01-01 00:23:16.190495 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-01 00:23:16.190508 | orchestrator | ok: [testbed-manager] 2026-01-01 00:23:16.190520 | orchestrator | 2026-01-01 00:23:16.190533 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-01 00:23:16.190546 | orchestrator | Thursday 01 January 2026 00:22:03 +0000 (0:00:31.428) 0:00:37.132 ****** 2026-01-01 00:23:16.190558 | orchestrator | changed: [testbed-manager] 2026-01-01 00:23:16.190570 | orchestrator | 2026-01-01 00:23:16.190583 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-01 00:23:16.190600 | orchestrator | Thursday 01 January 2026 00:22:15 +0000 (0:00:12.018) 0:00:49.150 ****** 2026-01-01 00:23:16.190614 | orchestrator | Pausing for 60 seconds 2026-01-01 00:23:16.190626 | orchestrator | changed: [testbed-manager] 2026-01-01 00:23:16.190639 | orchestrator | 2026-01-01 00:23:16.190652 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-01 00:23:16.190665 | orchestrator | Thursday 01 January 2026 00:23:15 +0000 (0:01:00.098) 0:01:49.248 ****** 2026-01-01 00:23:16.190676 | orchestrator | ok: [testbed-manager] 2026-01-01 00:23:16.190687 | orchestrator | 2026-01-01 00:23:16.190698 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-01 00:23:16.190709 | orchestrator | Thursday 01 January 2026 00:23:15 +0000 (0:00:00.065) 0:01:49.313 ****** 2026-01-01 00:23:16.190720 | orchestrator | changed: [testbed-manager] 2026-01-01 00:23:16.190731 | orchestrator | 2026-01-01 00:23:16.190742 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:23:16.190761 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:23:16.190772 | orchestrator | 2026-01-01 00:23:16.190783 | orchestrator | 2026-01-01 00:23:16.190795 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-01 00:23:16.190806 | orchestrator | Thursday 01 January 2026 00:23:15 +0000 (0:00:00.655) 0:01:49.969 ****** 2026-01-01 00:23:16.190817 | orchestrator | =============================================================================== 2026-01-01 00:23:16.190828 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2026-01-01 00:23:16.190839 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.43s 2026-01-01 00:23:16.190880 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.02s 2026-01-01 00:23:16.190892 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.71s 2026-01-01 00:23:16.190903 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s 2026-01-01 00:23:16.190914 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.18s 2026-01-01 00:23:16.190926 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.97s 2026-01-01 00:23:16.190936 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2026-01-01 00:23:16.190947 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2026-01-01 00:23:16.190958 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-01-01 00:23:16.190969 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-01-01 00:23:16.532482 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-01 00:23:16.532617 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-01 00:23:16.538508 | orchestrator | + set -e 2026-01-01 00:23:16.538538 | orchestrator | + NAMESPACE=kolla 2026-01-01 
00:23:16.538552 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-01 00:23:16.544198 | orchestrator | ++ semver latest 9.0.0 2026-01-01 00:23:16.612066 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-01 00:23:16.612199 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-01 00:23:16.613032 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-01 00:23:28.762795 | orchestrator | 2026-01-01 00:23:28 | INFO  | Task 29a3a526-d2b6-4b2f-8cfc-99426b205369 (operator) was prepared for execution. 2026-01-01 00:23:28.762981 | orchestrator | 2026-01-01 00:23:28 | INFO  | It takes a moment until task 29a3a526-d2b6-4b2f-8cfc-99426b205369 (operator) has been started and output is visible here. 2026-01-01 00:23:44.860474 | orchestrator | 2026-01-01 00:23:44.860609 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-01 00:23:44.860627 | orchestrator | 2026-01-01 00:23:44.860639 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-01 00:23:44.860652 | orchestrator | Thursday 01 January 2026 00:23:33 +0000 (0:00:00.151) 0:00:00.151 ****** 2026-01-01 00:23:44.860663 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:23:44.860676 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:23:44.860689 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:23:44.860700 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:23:44.860711 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:23:44.860722 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:23:44.860733 | orchestrator | 2026-01-01 00:23:44.860749 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-01 00:23:44.860761 | orchestrator | Thursday 01 January 2026 00:23:36 +0000 (0:00:03.250) 0:00:03.401 ****** 2026-01-01 00:23:44.860772 | orchestrator | ok: [testbed-node-2] 
2026-01-01 00:23:44.860783 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:23:44.860794 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:23:44.860805 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:23:44.860815 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:23:44.860905 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:23:44.860918 | orchestrator | 2026-01-01 00:23:44.860929 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-01-01 00:23:44.860940 | orchestrator | 2026-01-01 00:23:44.860951 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-01 00:23:44.860962 | orchestrator | Thursday 01 January 2026 00:23:37 +0000 (0:00:00.784) 0:00:04.186 ****** 2026-01-01 00:23:44.860973 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:23:44.860986 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:23:44.860999 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:23:44.861011 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:23:44.861024 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:23:44.861036 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:23:44.861049 | orchestrator | 2026-01-01 00:23:44.861062 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-01 00:23:44.861075 | orchestrator | Thursday 01 January 2026 00:23:37 +0000 (0:00:00.194) 0:00:04.380 ****** 2026-01-01 00:23:44.861088 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:23:44.861100 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:23:44.861113 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:23:44.861126 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:23:44.861139 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:23:44.861152 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:23:44.861165 | orchestrator | 2026-01-01 00:23:44.861177 | orchestrator | TASK [osism.commons.operator : Create operator group] 
**************************
2026-01-01 00:23:44.861190 | orchestrator | Thursday 01 January 2026 00:23:37 +0000 (0:00:00.190) 0:00:04.571 ******
2026-01-01 00:23:44.861203 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:23:44.861217 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:23:44.861230 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:23:44.861243 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:23:44.861255 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:23:44.861267 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:23:44.861279 | orchestrator |
2026-01-01 00:23:44.861292 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-01 00:23:44.861304 | orchestrator | Thursday 01 January 2026 00:23:38 +0000 (0:00:00.613) 0:00:05.185 ******
2026-01-01 00:23:44.861318 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:23:44.861330 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:23:44.861341 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:23:44.861352 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:23:44.861363 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:23:44.861374 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:23:44.861385 | orchestrator |
2026-01-01 00:23:44.861396 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-01 00:23:44.861407 | orchestrator | Thursday 01 January 2026 00:23:38 +0000 (0:00:00.797) 0:00:05.982 ******
2026-01-01 00:23:44.861418 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-01 00:23:44.861430 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-01 00:23:44.861441 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-01 00:23:44.861452 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-01 00:23:44.861463 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-01 00:23:44.861474 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-01 00:23:44.861485 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-01 00:23:44.861496 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-01 00:23:44.861507 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-01 00:23:44.861517 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-01 00:23:44.861528 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-01 00:23:44.861539 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-01 00:23:44.861550 | orchestrator |
2026-01-01 00:23:44.861561 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-01 00:23:44.861581 | orchestrator | Thursday 01 January 2026 00:23:40 +0000 (0:00:01.214) 0:00:07.196 ******
2026-01-01 00:23:44.861592 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:23:44.861603 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:23:44.861614 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:23:44.861647 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:23:44.861658 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:23:44.861669 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:23:44.861680 | orchestrator |
2026-01-01 00:23:44.861691 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-01 00:23:44.861703 | orchestrator | Thursday 01 January 2026 00:23:41 +0000 (0:00:01.219) 0:00:08.415 ******
2026-01-01 00:23:44.861714 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-01 00:23:44.861724 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-01 00:23:44.861735 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-01 00:23:44.861746 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:23:44.861776 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:23:44.861788 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:23:44.861799 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:23:44.861810 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:23:44.861821 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-01 00:23:44.861832 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-01 00:23:44.861860 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-01 00:23:44.861872 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-01 00:23:44.861882 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-01 00:23:44.861893 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-01 00:23:44.861904 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-01 00:23:44.861915 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:23:44.861926 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:23:44.861937 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:23:44.861948 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:23:44.861959 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:23:44.861970 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-01 00:23:44.861981 | orchestrator |
2026-01-01 00:23:44.861992 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-01 00:23:44.862004 | orchestrator | Thursday 01 January 2026 00:23:42 +0000 (0:00:01.281) 0:00:09.697 ******
2026-01-01 00:23:44.862075 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:23:44.862087 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:23:44.862098 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:23:44.862109 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:23:44.862120 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:23:44.862131 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:23:44.862142 | orchestrator |
2026-01-01 00:23:44.862158 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-01 00:23:44.862170 | orchestrator | Thursday 01 January 2026 00:23:42 +0000 (0:00:00.178) 0:00:09.875 ******
2026-01-01 00:23:44.862181 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:23:44.862192 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:23:44.862202 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:23:44.862213 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:23:44.862233 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:23:44.862244 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:23:44.862255 | orchestrator |
2026-01-01 00:23:44.862266 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-01 00:23:44.862277 | orchestrator | Thursday 01 January 2026 00:23:43 +0000 (0:00:00.192) 0:00:10.068 ******
2026-01-01 00:23:44.862288 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:23:44.862299 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:23:44.862310 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:23:44.862321 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:23:44.862331 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:23:44.862342 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:23:44.862353 | orchestrator |
2026-01-01 00:23:44.862364 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-01 00:23:44.862375 | orchestrator | Thursday 01 January 2026 00:23:43 +0000 (0:00:00.639) 0:00:10.707 ******
2026-01-01 00:23:44.862386 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:23:44.862397 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:23:44.862408 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:23:44.862419 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:23:44.862430 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:23:44.862440 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:23:44.862451 | orchestrator |
2026-01-01 00:23:44.862462 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-01 00:23:44.862474 | orchestrator | Thursday 01 January 2026 00:23:43 +0000 (0:00:00.178) 0:00:10.886 ******
2026-01-01 00:23:44.862485 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-01 00:23:44.862496 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:23:44.862507 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-01 00:23:44.862518 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:23:44.862529 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-01 00:23:44.862540 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:23:44.862551 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-01 00:23:44.862562 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:23:44.862573 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-01 00:23:44.862584 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:23:44.862595 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-01 00:23:44.862607 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:23:44.862618 | orchestrator |
2026-01-01 00:23:44.862629 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-01 00:23:44.862640 | orchestrator | Thursday 01 January 2026 00:23:44 +0000 (0:00:00.711) 0:00:11.598 ******
2026-01-01 00:23:44.862650 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:23:44.862661 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:23:44.862672 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:23:44.862683 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:23:44.862694 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:23:44.862705 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:23:44.862716 | orchestrator |
2026-01-01 00:23:44.862727 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-01 00:23:44.862738 | orchestrator | Thursday 01 January 2026 00:23:44 +0000 (0:00:00.159) 0:00:11.758 ******
2026-01-01 00:23:44.862749 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:23:44.862760 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:23:44.862771 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:23:44.862781 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:23:44.862800 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:23:46.225093 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:23:46.225224 | orchestrator |
2026-01-01 00:23:46.225242 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-01 00:23:46.225256 | orchestrator | Thursday 01 January 2026 00:23:44 +0000 (0:00:00.163) 0:00:11.921 ******
2026-01-01 00:23:46.225296 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:23:46.225308 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:23:46.225319 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:23:46.225330 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:23:46.225341 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:23:46.225351 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:23:46.225362 | orchestrator |
2026-01-01 00:23:46.225374 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-01 00:23:46.225385 | orchestrator | Thursday 01 January 2026 00:23:45 +0000 (0:00:00.163) 0:00:12.085 ******
2026-01-01 00:23:46.225395 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:23:46.225406 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:23:46.225417 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:23:46.225428 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:23:46.225438 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:23:46.225449 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:23:46.225460 | orchestrator |
2026-01-01 00:23:46.225471 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-01 00:23:46.225481 | orchestrator | Thursday 01 January 2026 00:23:45 +0000 (0:00:00.659) 0:00:12.744 ******
2026-01-01 00:23:46.225492 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:23:46.225503 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:23:46.225514 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:23:46.225524 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:23:46.225535 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:23:46.225546 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:23:46.225556 | orchestrator |
2026-01-01 00:23:46.225567 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:23:46.225579 | orchestrator | testbed-node-0 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-01-01 00:23:46.225593 | orchestrator | testbed-node-1 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-01-01 00:23:46.225604 | orchestrator | testbed-node-2 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-01-01 00:23:46.225615 | orchestrator | testbed-node-3 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-01-01 00:23:46.225626 | orchestrator | testbed-node-4 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-01-01 00:23:46.225637 | orchestrator | testbed-node-5 : ok=12 changed=8 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2026-01-01 00:23:46.225647 | orchestrator |
2026-01-01 00:23:46.225658 | orchestrator |
2026-01-01 00:23:46.225669 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:23:46.225680 | orchestrator | Thursday 01 January 2026 00:23:45 +0000 (0:00:00.252) 0:00:12.997 ******
2026-01-01 00:23:46.225691 | orchestrator | ===============================================================================
2026-01-01 00:23:46.225702 | orchestrator | Gathering Facts --------------------------------------------------------- 3.25s
2026-01-01 00:23:46.225713 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s
2026-01-01 00:23:46.225725 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s
2026-01-01 00:23:46.225735 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s
2026-01-01 00:23:46.225746 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s
2026-01-01 00:23:46.225757 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s
2026-01-01 00:23:46.225768 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2026-01-01 00:23:46.225796 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2026-01-01 00:23:46.225807 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.64s
2026-01-01 00:23:46.225818 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2026-01-01 00:23:46.225829 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2026-01-01 00:23:46.225863 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s
2026-01-01 00:23:46.225875 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-01-01 00:23:46.225910 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2026-01-01 00:23:46.225921 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-01-01 00:23:46.225932 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2026-01-01 00:23:46.225943 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2026-01-01 00:23:46.225954 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2026-01-01 00:23:46.225965 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-01-01 00:23:46.565398 | orchestrator | + osism apply --environment custom facts
2026-01-01 00:23:48.519470 | orchestrator | 2026-01-01 00:23:48 | INFO  | Trying to run play facts in environment custom
2026-01-01 00:23:58.629134 | orchestrator | 2026-01-01 00:23:58 | INFO  | Task a27ad8da-3ccd-4547-a9a2-13ffd2d87804 (facts) was prepared for execution.
2026-01-01 00:23:58.629240 | orchestrator | 2026-01-01 00:23:58 | INFO  | It takes a moment until task a27ad8da-3ccd-4547-a9a2-13ffd2d87804 (facts) has been started and output is visible here.
2026-01-01 00:24:41.176107 | orchestrator |
2026-01-01 00:24:41.176248 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-01 00:24:41.176267 | orchestrator |
2026-01-01 00:24:41.176279 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-01 00:24:41.176291 | orchestrator | Thursday 01 January 2026 00:24:02 +0000 (0:00:00.087) 0:00:00.087 ******
2026-01-01 00:24:41.176303 | orchestrator | ok: [testbed-manager]
2026-01-01 00:24:41.176315 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:24:41.176328 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:24:41.176338 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:24:41.176349 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:24:41.176360 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:24:41.176371 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:24:41.176382 | orchestrator |
2026-01-01 00:24:41.176393 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-01 00:24:41.176404 | orchestrator | Thursday 01 January 2026 00:24:04 +0000 (0:00:01.375) 0:00:01.463 ******
2026-01-01 00:24:41.176415 | orchestrator | ok: [testbed-manager]
2026-01-01 00:24:41.176426 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:24:41.176436 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:24:41.176447 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:24:41.176458 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:24:41.176469 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:24:41.176480 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:24:41.176492 | orchestrator |
2026-01-01 00:24:41.176503 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-01 00:24:41.176514 | orchestrator |
2026-01-01 00:24:41.176525 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-01 00:24:41.176558 | orchestrator | Thursday 01 January 2026 00:24:05 +0000 (0:00:01.242) 0:00:02.705 ******
2026-01-01 00:24:41.176569 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:24:41.176581 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:24:41.176592 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:24:41.176628 | orchestrator |
2026-01-01 00:24:41.176642 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-01 00:24:41.176655 | orchestrator | Thursday 01 January 2026 00:24:05 +0000 (0:00:00.107) 0:00:02.813 ******
2026-01-01 00:24:41.176668 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:24:41.176680 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:24:41.176692 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:24:41.176704 | orchestrator |
2026-01-01 00:24:41.176716 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-01 00:24:41.176729 | orchestrator | Thursday 01 January 2026 00:24:05 +0000 (0:00:00.218) 0:00:03.031 ******
2026-01-01 00:24:41.176741 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:24:41.176754 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:24:41.176766 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:24:41.176779 | orchestrator |
2026-01-01 00:24:41.176791 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-01 00:24:41.176804 | orchestrator | Thursday 01 January 2026 00:24:06 +0000 (0:00:00.222) 0:00:03.254 ******
2026-01-01 00:24:41.176839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:24:41.176854 | orchestrator |
2026-01-01 00:24:41.176868 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-01 00:24:41.176880 | orchestrator | Thursday 01 January 2026 00:24:06 +0000 (0:00:00.163) 0:00:03.418 ******
2026-01-01 00:24:41.176893 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:24:41.176906 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:24:41.176917 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:24:41.176930 | orchestrator |
2026-01-01 00:24:41.176942 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-01 00:24:41.176955 | orchestrator | Thursday 01 January 2026 00:24:06 +0000 (0:00:00.432) 0:00:03.850 ******
2026-01-01 00:24:41.176967 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:24:41.176980 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:24:41.176990 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:24:41.177001 | orchestrator |
2026-01-01 00:24:41.177012 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-01 00:24:41.177023 | orchestrator | Thursday 01 January 2026 00:24:06 +0000 (0:00:00.151) 0:00:04.002 ******
2026-01-01 00:24:41.177034 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:24:41.177045 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:24:41.177056 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:24:41.177067 | orchestrator |
2026-01-01 00:24:41.177078 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-01 00:24:41.177088 | orchestrator | Thursday 01 January 2026 00:24:07 +0000 (0:00:01.060) 0:00:05.063 ******
2026-01-01 00:24:41.177099 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:24:41.177110 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:24:41.177121 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:24:41.177132 | orchestrator |
2026-01-01 00:24:41.177143 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-01 00:24:41.177154 | orchestrator | Thursday 01 January 2026 00:24:08 +0000 (0:00:00.465) 0:00:05.528 ******
2026-01-01 00:24:41.177166 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:24:41.177177 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:24:41.177187 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:24:41.177198 | orchestrator |
2026-01-01 00:24:41.177209 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-01 00:24:41.177220 | orchestrator | Thursday 01 January 2026 00:24:09 +0000 (0:00:01.089) 0:00:06.617 ******
2026-01-01 00:24:41.177231 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:24:41.177242 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:24:41.177253 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:24:41.177264 | orchestrator |
2026-01-01 00:24:41.177275 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-01 00:24:41.177294 | orchestrator | Thursday 01 January 2026 00:24:25 +0000 (0:00:15.851) 0:00:22.469 ******
2026-01-01 00:24:41.177306 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:24:41.177317 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:24:41.177327 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:24:41.177338 | orchestrator |
2026-01-01 00:24:41.177349 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-01 00:24:41.177380 | orchestrator | Thursday 01 January 2026 00:24:25 +0000 (0:00:00.099) 0:00:22.568 ******
2026-01-01 00:24:41.177392 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:24:41.177403 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:24:41.177413 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:24:41.177424 | orchestrator |
2026-01-01 00:24:41.177435 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-01 00:24:41.177446 | orchestrator | Thursday 01 January 2026 00:24:32 +0000 (0:00:07.104) 0:00:29.673 ******
2026-01-01 00:24:41.177457 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:24:41.177468 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:24:41.177479 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:24:41.177490 | orchestrator |
2026-01-01 00:24:41.177501 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-01 00:24:41.177511 | orchestrator | Thursday 01 January 2026 00:24:32 +0000 (0:00:00.443) 0:00:30.117 ******
2026-01-01 00:24:41.177522 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-01 00:24:41.177533 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-01 00:24:41.177544 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-01 00:24:41.177555 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-01 00:24:41.177566 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-01 00:24:41.177577 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-01 00:24:41.177588 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-01 00:24:41.177599 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-01 00:24:41.177610 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-01 00:24:41.177621 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-01 00:24:41.177632 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-01 00:24:41.177643 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-01 00:24:41.177654 | orchestrator |
2026-01-01 00:24:41.177665 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-01 00:24:41.177676 | orchestrator | Thursday 01 January 2026 00:24:36 +0000 (0:00:03.369) 0:00:33.486 ******
2026-01-01 00:24:41.177687 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:24:41.177698 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:24:41.177709 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:24:41.177719 | orchestrator |
2026-01-01 00:24:41.177730 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-01 00:24:41.177741 | orchestrator |
2026-01-01 00:24:41.177752 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-01 00:24:41.177764 | orchestrator | Thursday 01 January 2026 00:24:37 +0000 (0:00:01.316) 0:00:34.803 ******
2026-01-01 00:24:41.177775 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:24:41.177785 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:24:41.177796 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:24:41.177807 | orchestrator | ok: [testbed-manager]
2026-01-01 00:24:41.177845 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:24:41.177856 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:24:41.177867 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:24:41.177877 | orchestrator |
2026-01-01 00:24:41.177888 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:24:41.177908 | orchestrator | testbed-manager : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:24:41.177920 | orchestrator | testbed-node-0 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:24:41.177933 | orchestrator | testbed-node-1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:24:41.177944 | orchestrator | testbed-node-2 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:24:41.177955 | orchestrator | testbed-node-3 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-01-01 00:24:41.177966 | orchestrator | testbed-node-4 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-01-01 00:24:41.177977 | orchestrator | testbed-node-5 : ok=16 changed=7 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-01-01 00:24:41.177988 | orchestrator |
2026-01-01 00:24:41.177999 | orchestrator |
2026-01-01 00:24:41.178010 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:24:41.178099 | orchestrator | Thursday 01 January 2026 00:24:41 +0000 (0:00:03.597) 0:00:38.400 ******
2026-01-01 00:24:41.178111 | orchestrator | ===============================================================================
2026-01-01 00:24:41.178122 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.85s
2026-01-01 00:24:41.178133 | orchestrator | Install required packages (Debian) -------------------------------------- 7.10s
2026-01-01 00:24:41.178144 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.60s
2026-01-01 00:24:41.178155 | orchestrator | Copy fact files --------------------------------------------------------- 3.37s
2026-01-01 00:24:41.178166 | orchestrator | Create custom facts directory ------------------------------------------- 1.38s
2026-01-01 00:24:41.178177 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.32s
2026-01-01 00:24:41.178195 | orchestrator | Copy fact file ---------------------------------------------------------- 1.24s
2026-01-01 00:24:41.438987 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s
2026-01-01 00:24:41.439116 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2026-01-01 00:24:41.439132 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2026-01-01 00:24:41.439210 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2026-01-01 00:24:41.439225 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-01-01 00:24:41.439236 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-01-01 00:24:41.439247 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2026-01-01 00:24:41.439259 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-01-01 00:24:41.439271 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2026-01-01 00:24:41.439282 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-01-01 00:24:41.439293 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-01-01 00:24:41.793420 | orchestrator | + osism apply bootstrap
2026-01-01 00:24:54.167992 | orchestrator | 2026-01-01 00:24:54 | INFO  | Task 2a44fba9-b0c1-4081-ac57-bb0988845040 (bootstrap) was prepared for execution.
2026-01-01 00:24:54.168114 | orchestrator | 2026-01-01 00:24:54 | INFO  | It takes a moment until task 2a44fba9-b0c1-4081-ac57-bb0988845040 (bootstrap) has been started and output is visible here.
2026-01-01 00:25:10.445016 | orchestrator |
2026-01-01 00:25:10.445122 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-01 00:25:10.445134 | orchestrator |
2026-01-01 00:25:10.445141 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-01 00:25:10.445148 | orchestrator | Thursday 01 January 2026 00:24:58 +0000 (0:00:00.158) 0:00:00.158 ******
2026-01-01 00:25:10.445155 | orchestrator | ok: [testbed-manager]
2026-01-01 00:25:10.445162 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:25:10.445169 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:25:10.445175 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:25:10.445181 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:25:10.445188 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:25:10.445194 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:25:10.445201 | orchestrator |
2026-01-01 00:25:10.445207 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-01 00:25:10.445214 | orchestrator |
2026-01-01 00:25:10.445221 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-01 00:25:10.445228 | orchestrator | Thursday 01 January 2026 00:24:58 +0000 (0:00:00.271) 0:00:00.429 ******
2026-01-01 00:25:10.445235 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:25:10.445241 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:25:10.445248 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:25:10.445255 | orchestrator | ok: [testbed-manager]
2026-01-01 00:25:10.445262 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:25:10.445269 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:25:10.445276 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:25:10.445282 | orchestrator |
2026-01-01 00:25:10.445289 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-01 00:25:10.445295 | orchestrator |
2026-01-01 00:25:10.445302 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-01 00:25:10.445309 | orchestrator | Thursday 01 January 2026 00:25:02 +0000 (0:00:03.570) 0:00:04.000 ******
2026-01-01 00:25:10.445317 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-01 00:25:10.445325 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-01 00:25:10.445331 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-01 00:25:10.445337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-01 00:25:10.445343 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-01 00:25:10.445350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 00:25:10.445356 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-01 00:25:10.445363 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-01 00:25:10.445370 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-01 00:25:10.445377 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-01 00:25:10.445383 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-01 00:25:10.445390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-01 00:25:10.445397 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-01 00:25:10.445403 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-01 00:25:10.445410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-01 00:25:10.445417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-01 00:25:10.445424 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-01 00:25:10.445431 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:25:10.445438 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-01 00:25:10.445445 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-01 00:25:10.445452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-01 00:25:10.445459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-01 00:25:10.445466 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-01 00:25:10.445479 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-01 00:25:10.445485 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-01 00:25:10.445491 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-01 00:25:10.445497 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-01 00:25:10.445503 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:25:10.445508 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:25:10.445514 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-01 00:25:10.445520 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-01 00:25:10.445526 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-01 00:25:10.445532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-01 00:25:10.445538 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-01 00:25:10.445545 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-01 00:25:10.445551 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-01 00:25:10.445558 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-01 00:25:10.445565 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-01 00:25:10.445571 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-01 00:25:10.445578 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-01 00:25:10.445584 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-01 00:25:10.445592 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:25:10.445598 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:25:10.445606 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-01 00:25:10.445625 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-01 00:25:10.445633 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-01 00:25:10.445640 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-01 00:25:10.445661 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-01 00:25:10.445669 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-01 00:25:10.445676 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-01 00:25:10.445683 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-01 00:25:10.445690 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:25:10.445697 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-01 00:25:10.445704 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-01 00:25:10.445711 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-01 00:25:10.445719 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:25:10.445725 | orchestrator |
2026-01-01 00:25:10.445732 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-01 00:25:10.445739 | orchestrator |
2026-01-01 00:25:10.445746 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-01 00:25:10.445753 | orchestrator | Thursday 01 January 2026 00:25:02 +0000
(0:00:00.505) 0:00:04.505 ****** 2026-01-01 00:25:10.445760 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:10.445767 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:10.445774 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:10.445780 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:25:10.445787 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:25:10.445794 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:10.445838 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:25:10.445846 | orchestrator | 2026-01-01 00:25:10.445853 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-01-01 00:25:10.445860 | orchestrator | Thursday 01 January 2026 00:25:04 +0000 (0:00:01.217) 0:00:05.723 ****** 2026-01-01 00:25:10.445868 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:25:10.445875 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:10.445888 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:25:10.445895 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:25:10.445901 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:10.445907 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:10.445914 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:10.445920 | orchestrator | 2026-01-01 00:25:10.445927 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-01-01 00:25:10.445934 | orchestrator | Thursday 01 January 2026 00:25:05 +0000 (0:00:01.292) 0:00:07.016 ****** 2026-01-01 00:25:10.445941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:25:10.445950 | orchestrator | 2026-01-01 00:25:10.445957 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-01-01 00:25:10.445963 | orchestrator 
| Thursday 01 January 2026 00:25:05 +0000 (0:00:00.312) 0:00:07.328 ****** 2026-01-01 00:25:10.445968 | orchestrator | changed: [testbed-manager] 2026-01-01 00:25:10.445974 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:25:10.445980 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:25:10.445986 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:25:10.445992 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:25:10.445997 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:25:10.446003 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:25:10.446009 | orchestrator | 2026-01-01 00:25:10.446072 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-01 00:25:10.446080 | orchestrator | Thursday 01 January 2026 00:25:07 +0000 (0:00:02.177) 0:00:09.505 ****** 2026-01-01 00:25:10.446087 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:25:10.446095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:25:10.446102 | orchestrator | 2026-01-01 00:25:10.446109 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-01 00:25:10.446116 | orchestrator | Thursday 01 January 2026 00:25:08 +0000 (0:00:00.278) 0:00:09.784 ****** 2026-01-01 00:25:10.446123 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:25:10.446129 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:25:10.446136 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:25:10.446143 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:25:10.446150 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:25:10.446158 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:25:10.446165 | orchestrator | 2026-01-01 00:25:10.446172 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2026-01-01 00:25:10.446179 | orchestrator | Thursday 01 January 2026 00:25:09 +0000 (0:00:01.048) 0:00:10.832 ****** 2026-01-01 00:25:10.446186 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:25:10.446193 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:25:10.446200 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:25:10.446207 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:25:10.446214 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:25:10.446221 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:25:10.446229 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:25:10.446236 | orchestrator | 2026-01-01 00:25:10.446243 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-01 00:25:10.446251 | orchestrator | Thursday 01 January 2026 00:25:09 +0000 (0:00:00.579) 0:00:11.412 ****** 2026-01-01 00:25:10.446258 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:25:10.446264 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:25:10.446271 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:25:10.446278 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:25:10.446285 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:25:10.446299 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:25:10.446306 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:10.446313 | orchestrator | 2026-01-01 00:25:10.446321 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-01 00:25:10.446329 | orchestrator | Thursday 01 January 2026 00:25:10 +0000 (0:00:00.427) 0:00:11.839 ****** 2026-01-01 00:25:10.446337 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:25:10.446344 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:25:10.446360 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:25:22.756835 | orchestrator | skipping: 
[testbed-node-5] 2026-01-01 00:25:22.756994 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:25:22.757010 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:25:22.757069 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:25:22.757084 | orchestrator | 2026-01-01 00:25:22.757097 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-01 00:25:22.757111 | orchestrator | Thursday 01 January 2026 00:25:10 +0000 (0:00:00.257) 0:00:12.097 ****** 2026-01-01 00:25:22.757125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:25:22.757157 | orchestrator | 2026-01-01 00:25:22.757169 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-01 00:25:22.757181 | orchestrator | Thursday 01 January 2026 00:25:10 +0000 (0:00:00.316) 0:00:12.413 ****** 2026-01-01 00:25:22.757192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:25:22.757204 | orchestrator | 2026-01-01 00:25:22.757215 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-01 00:25:22.757226 | orchestrator | Thursday 01 January 2026 00:25:11 +0000 (0:00:00.304) 0:00:12.718 ****** 2026-01-01 00:25:22.757237 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.757249 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:25:22.757260 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:22.757270 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:22.757281 | orchestrator | ok: [testbed-node-0] 2026-01-01 
00:25:22.757293 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:25:22.757306 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:22.757318 | orchestrator | 2026-01-01 00:25:22.757331 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-01 00:25:22.757358 | orchestrator | Thursday 01 January 2026 00:25:12 +0000 (0:00:01.428) 0:00:14.146 ****** 2026-01-01 00:25:22.757372 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:25:22.757384 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:25:22.757397 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:25:22.757411 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:25:22.757423 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:25:22.757436 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:25:22.757449 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:25:22.757463 | orchestrator | 2026-01-01 00:25:22.757477 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-01 00:25:22.757497 | orchestrator | Thursday 01 January 2026 00:25:12 +0000 (0:00:00.333) 0:00:14.480 ****** 2026-01-01 00:25:22.757518 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.757537 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:22.757558 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:22.757579 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:22.757601 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:25:22.757621 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:25:22.757634 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:25:22.757648 | orchestrator | 2026-01-01 00:25:22.757660 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-01 00:25:22.757697 | orchestrator | Thursday 01 January 2026 00:25:13 +0000 (0:00:00.526) 0:00:15.007 ****** 2026-01-01 00:25:22.757709 | orchestrator | skipping: 
[testbed-manager] 2026-01-01 00:25:22.757719 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:25:22.757730 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:25:22.757741 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:25:22.757751 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:25:22.757762 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:25:22.757773 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:25:22.757784 | orchestrator | 2026-01-01 00:25:22.757819 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-01 00:25:22.757832 | orchestrator | Thursday 01 January 2026 00:25:13 +0000 (0:00:00.266) 0:00:15.273 ****** 2026-01-01 00:25:22.757843 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.757854 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:25:22.757865 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:25:22.757876 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:25:22.757887 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:25:22.757897 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:25:22.757908 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:25:22.757919 | orchestrator | 2026-01-01 00:25:22.757930 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-01 00:25:22.757941 | orchestrator | Thursday 01 January 2026 00:25:14 +0000 (0:00:00.545) 0:00:15.819 ****** 2026-01-01 00:25:22.757952 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.757963 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:25:22.757974 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:25:22.757984 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:25:22.757995 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:25:22.758006 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:25:22.758069 | orchestrator | changed: 
[testbed-node-2] 2026-01-01 00:25:22.758084 | orchestrator | 2026-01-01 00:25:22.758095 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-01 00:25:22.758107 | orchestrator | Thursday 01 January 2026 00:25:15 +0000 (0:00:01.157) 0:00:16.976 ****** 2026-01-01 00:25:22.758117 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:22.758128 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:25:22.758139 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:22.758150 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:25:22.758161 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:22.758173 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.758184 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:25:22.758195 | orchestrator | 2026-01-01 00:25:22.758213 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-01 00:25:22.758224 | orchestrator | Thursday 01 January 2026 00:25:16 +0000 (0:00:01.040) 0:00:18.017 ****** 2026-01-01 00:25:22.758256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:25:22.758269 | orchestrator | 2026-01-01 00:25:22.758280 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-01 00:25:22.758291 | orchestrator | Thursday 01 January 2026 00:25:16 +0000 (0:00:00.332) 0:00:18.350 ****** 2026-01-01 00:25:22.758302 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:25:22.758313 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:25:22.758324 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:25:22.758334 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:25:22.758345 | orchestrator | changed: [testbed-node-1] 2026-01-01 
00:25:22.758356 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:25:22.758367 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:25:22.758378 | orchestrator | 2026-01-01 00:25:22.758389 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-01 00:25:22.758408 | orchestrator | Thursday 01 January 2026 00:25:18 +0000 (0:00:01.315) 0:00:19.665 ****** 2026-01-01 00:25:22.758419 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.758430 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:22.758441 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:22.758452 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:22.758462 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:25:22.758473 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:25:22.758484 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:25:22.758495 | orchestrator | 2026-01-01 00:25:22.758506 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-01 00:25:22.758516 | orchestrator | Thursday 01 January 2026 00:25:18 +0000 (0:00:00.267) 0:00:19.933 ****** 2026-01-01 00:25:22.758527 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.758538 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:22.758549 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:22.758560 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:22.758570 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:25:22.758581 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:25:22.758592 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:25:22.758603 | orchestrator | 2026-01-01 00:25:22.758614 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-01 00:25:22.758625 | orchestrator | Thursday 01 January 2026 00:25:18 +0000 (0:00:00.253) 0:00:20.186 ****** 2026-01-01 00:25:22.758636 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.758646 | 
orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:22.758657 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:22.758668 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:22.758678 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:25:22.758689 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:25:22.758700 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:25:22.758710 | orchestrator | 2026-01-01 00:25:22.758721 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-01 00:25:22.758732 | orchestrator | Thursday 01 January 2026 00:25:18 +0000 (0:00:00.246) 0:00:20.433 ****** 2026-01-01 00:25:22.758744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:25:22.758757 | orchestrator | 2026-01-01 00:25:22.758768 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-01 00:25:22.758779 | orchestrator | Thursday 01 January 2026 00:25:19 +0000 (0:00:00.320) 0:00:20.753 ****** 2026-01-01 00:25:22.758830 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.758850 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:22.758869 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:22.758885 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:22.758902 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:25:22.758920 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:25:22.758940 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:25:22.758961 | orchestrator | 2026-01-01 00:25:22.758980 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-01 00:25:22.759000 | orchestrator | Thursday 01 January 2026 00:25:19 +0000 (0:00:00.535) 0:00:21.289 ****** 2026-01-01 00:25:22.759011 | orchestrator | 
skipping: [testbed-manager] 2026-01-01 00:25:22.759022 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:25:22.759033 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:25:22.759044 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:25:22.759054 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:25:22.759066 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:25:22.759085 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:25:22.759102 | orchestrator | 2026-01-01 00:25:22.759119 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-01 00:25:22.759135 | orchestrator | Thursday 01 January 2026 00:25:19 +0000 (0:00:00.241) 0:00:21.531 ****** 2026-01-01 00:25:22.759164 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:22.759181 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:22.759198 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.759214 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:22.759232 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:25:22.759248 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:25:22.759266 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:25:22.759284 | orchestrator | 2026-01-01 00:25:22.759304 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-01 00:25:22.759323 | orchestrator | Thursday 01 January 2026 00:25:21 +0000 (0:00:01.044) 0:00:22.575 ****** 2026-01-01 00:25:22.759337 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.759349 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:22.759360 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:22.759370 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:22.759381 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:25:22.759392 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:25:22.759403 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:25:22.759414 | orchestrator | 
2026-01-01 00:25:22.759431 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-01 00:25:22.759443 | orchestrator | Thursday 01 January 2026 00:25:21 +0000 (0:00:00.575) 0:00:23.151 ****** 2026-01-01 00:25:22.759454 | orchestrator | ok: [testbed-manager] 2026-01-01 00:25:22.759465 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:25:22.759475 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:25:22.759486 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:25:22.759509 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:26:03.394546 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:26:03.394699 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:26:03.394716 | orchestrator | 2026-01-01 00:26:03.394730 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-01 00:26:03.394745 | orchestrator | Thursday 01 January 2026 00:25:22 +0000 (0:00:01.152) 0:00:24.304 ****** 2026-01-01 00:26:03.394758 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:26:03.394828 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:26:03.394841 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:26:03.394853 | orchestrator | changed: [testbed-manager] 2026-01-01 00:26:03.394866 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:26:03.394878 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:26:03.394890 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:26:03.394903 | orchestrator | 2026-01-01 00:26:03.394915 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-01 00:26:03.394927 | orchestrator | Thursday 01 January 2026 00:25:38 +0000 (0:00:15.538) 0:00:39.842 ****** 2026-01-01 00:26:03.394939 | orchestrator | ok: [testbed-manager] 2026-01-01 00:26:03.394951 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:26:03.394963 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:26:03.394976 | orchestrator 
| ok: [testbed-node-5] 2026-01-01 00:26:03.394988 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:26:03.395000 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:26:03.395012 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:26:03.395024 | orchestrator | 2026-01-01 00:26:03.395036 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-01-01 00:26:03.395048 | orchestrator | Thursday 01 January 2026 00:25:38 +0000 (0:00:00.280) 0:00:40.122 ****** 2026-01-01 00:26:03.395061 | orchestrator | ok: [testbed-manager] 2026-01-01 00:26:03.395073 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:26:03.395086 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:26:03.395099 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:26:03.395111 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:26:03.395124 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:26:03.395137 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:26:03.395149 | orchestrator | 2026-01-01 00:26:03.395162 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-01-01 00:26:03.395174 | orchestrator | Thursday 01 January 2026 00:25:38 +0000 (0:00:00.224) 0:00:40.347 ****** 2026-01-01 00:26:03.395220 | orchestrator | ok: [testbed-manager] 2026-01-01 00:26:03.395233 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:26:03.395245 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:26:03.395257 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:26:03.395269 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:26:03.395281 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:26:03.395293 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:26:03.395305 | orchestrator | 2026-01-01 00:26:03.395317 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-01-01 00:26:03.395329 | orchestrator | Thursday 01 January 2026 00:25:39 +0000 (0:00:00.227) 0:00:40.574 ****** 2026-01-01 
00:26:03.395343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:26:03.395358 | orchestrator | 2026-01-01 00:26:03.395370 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-01-01 00:26:03.395382 | orchestrator | Thursday 01 January 2026 00:25:39 +0000 (0:00:00.350) 0:00:40.924 ****** 2026-01-01 00:26:03.395394 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:26:03.395405 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:26:03.395417 | orchestrator | ok: [testbed-manager] 2026-01-01 00:26:03.395428 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:26:03.395439 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:26:03.395451 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:26:03.395462 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:26:03.395474 | orchestrator | 2026-01-01 00:26:03.395486 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-01-01 00:26:03.395498 | orchestrator | Thursday 01 January 2026 00:25:40 +0000 (0:00:01.553) 0:00:42.478 ****** 2026-01-01 00:26:03.395510 | orchestrator | changed: [testbed-manager] 2026-01-01 00:26:03.395522 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:26:03.395532 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:26:03.395543 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:26:03.395554 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:26:03.395564 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:26:03.395574 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:26:03.395585 | orchestrator | 2026-01-01 00:26:03.395596 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-01-01 00:26:03.395608 | 
orchestrator | Thursday 01 January 2026 00:25:42 +0000 (0:00:01.087) 0:00:43.565 ******
2026-01-01 00:26:03.395619 | orchestrator | ok: [testbed-manager]
2026-01-01 00:26:03.395631 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:26:03.395643 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:26:03.395654 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:26:03.395666 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:26:03.395677 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:26:03.395688 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:26:03.395700 | orchestrator |
2026-01-01 00:26:03.395711 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-01 00:26:03.395723 | orchestrator | Thursday 01 January 2026 00:25:42 +0000 (0:00:00.806) 0:00:44.372 ******
2026-01-01 00:26:03.395736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:26:03.395751 | orchestrator |
2026-01-01 00:26:03.395763 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-01 00:26:03.395816 | orchestrator | Thursday 01 January 2026 00:25:43 +0000 (0:00:00.329) 0:00:44.701 ******
2026-01-01 00:26:03.395827 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:26:03.395838 | orchestrator | changed: [testbed-manager]
2026-01-01 00:26:03.395848 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:26:03.395858 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:26:03.395869 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:26:03.395893 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:26:03.395904 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:26:03.395915 | orchestrator |
2026-01-01 00:26:03.395953 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-01 00:26:03.395966 | orchestrator | Thursday 01 January 2026 00:25:44 +0000 (0:00:01.151) 0:00:45.853 ******
2026-01-01 00:26:03.395978 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:26:03.395989 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:26:03.396001 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:26:03.396012 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:26:03.396023 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:26:03.396035 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:26:03.396046 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:26:03.396058 | orchestrator |
2026-01-01 00:26:03.396069 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-01 00:26:03.396081 | orchestrator | Thursday 01 January 2026 00:25:44 +0000 (0:00:00.228) 0:00:46.082 ******
2026-01-01 00:26:03.396093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:26:03.396105 | orchestrator |
2026-01-01 00:26:03.396117 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-01 00:26:03.396129 | orchestrator | Thursday 01 January 2026 00:25:44 +0000 (0:00:00.346) 0:00:46.428 ******
2026-01-01 00:26:03.396140 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:26:03.396151 | orchestrator | ok: [testbed-manager]
2026-01-01 00:26:03.396162 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:26:03.396174 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:26:03.396185 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:26:03.396196 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:26:03.396233 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:26:03.396244 | orchestrator |
2026-01-01 00:26:03.396256 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-01 00:26:03.396267 | orchestrator | Thursday 01 January 2026 00:25:46 +0000 (0:00:01.507) 0:00:47.936 ******
2026-01-01 00:26:03.396279 | orchestrator | changed: [testbed-manager]
2026-01-01 00:26:03.396290 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:26:03.396302 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:26:03.396312 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:26:03.396323 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:26:03.396334 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:26:03.396346 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:26:03.396357 | orchestrator |
2026-01-01 00:26:03.396369 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-01 00:26:03.396380 | orchestrator | Thursday 01 January 2026 00:25:47 +0000 (0:00:01.122) 0:00:49.059 ******
2026-01-01 00:26:03.396393 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:26:03.396404 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:26:03.396416 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:26:03.396427 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:26:03.396439 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:26:03.396451 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:26:03.396462 | orchestrator | changed: [testbed-manager]
2026-01-01 00:26:03.396474 | orchestrator |
2026-01-01 00:26:03.396486 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-01 00:26:03.396498 | orchestrator | Thursday 01 January 2026 00:26:00 +0000 (0:00:12.913) 0:01:01.972 ******
2026-01-01 00:26:03.396509 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:26:03.396521 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:26:03.396533 | orchestrator | ok: [testbed-manager]
2026-01-01 00:26:03.396544 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:26:03.396556 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:26:03.396567 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:26:03.396589 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:26:03.396600 | orchestrator |
2026-01-01 00:26:03.396612 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-01 00:26:03.396624 | orchestrator | Thursday 01 January 2026 00:26:01 +0000 (0:00:01.256) 0:01:03.229 ******
2026-01-01 00:26:03.396636 | orchestrator | ok: [testbed-manager]
2026-01-01 00:26:03.396647 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:26:03.396659 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:26:03.396671 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:26:03.396683 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:26:03.396694 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:26:03.396706 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:26:03.396716 | orchestrator |
2026-01-01 00:26:03.396727 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-01 00:26:03.396738 | orchestrator | Thursday 01 January 2026 00:26:02 +0000 (0:00:00.918) 0:01:04.148 ******
2026-01-01 00:26:03.396750 | orchestrator | ok: [testbed-manager]
2026-01-01 00:26:03.396760 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:26:03.396829 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:26:03.396842 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:26:03.396853 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:26:03.396864 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:26:03.396876 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:26:03.396888 | orchestrator |
2026-01-01 00:26:03.396900 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-01 00:26:03.396912 | orchestrator | Thursday 01 January 2026 00:26:02 +0000 (0:00:00.227) 0:01:04.375 ******
2026-01-01 00:26:03.396923 | orchestrator | ok: [testbed-manager]
2026-01-01 00:26:03.396934 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:26:03.396945 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:26:03.396955 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:26:03.396967 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:26:03.396978 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:26:03.396989 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:26:03.397000 | orchestrator |
2026-01-01 00:26:03.397011 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-01-01 00:26:03.397023 | orchestrator | Thursday 01 January 2026 00:26:03 +0000 (0:00:00.245) 0:01:04.621 ******
2026-01-01 00:26:03.397044 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:26:03.397058 | orchestrator |
2026-01-01 00:26:03.397083 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-01-01 00:28:21.734130 | orchestrator | Thursday 01 January 2026 00:26:03 +0000 (0:00:00.322) 0:01:04.943 ******
2026-01-01 00:28:21.734278 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:21.734297 | orchestrator | ok: [testbed-manager]
2026-01-01 00:28:21.734310 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:21.734321 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:21.734332 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:21.734344 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:21.734355 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:21.734367 | orchestrator |
2026-01-01 00:28:21.734379 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-01-01 00:28:21.734391 | orchestrator | Thursday 01 January 2026 00:26:05 +0000 (0:00:01.669) 0:01:06.612 ******
2026-01-01 00:28:21.734402 | orchestrator | changed: [testbed-manager]
2026-01-01 00:28:21.734415 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:28:21.734426 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:28:21.734437 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:28:21.734447 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:28:21.734458 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:28:21.734469 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:28:21.734480 | orchestrator |
2026-01-01 00:28:21.734522 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-01-01 00:28:21.734535 | orchestrator | Thursday 01 January 2026 00:26:05 +0000 (0:00:00.543) 0:01:07.156 ******
2026-01-01 00:28:21.734546 | orchestrator | ok: [testbed-manager]
2026-01-01 00:28:21.734557 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:21.734568 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:21.734582 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:21.734595 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:21.734607 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:21.734619 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:21.734631 | orchestrator |
2026-01-01 00:28:21.734644 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-01-01 00:28:21.734657 | orchestrator | Thursday 01 January 2026 00:26:05 +0000 (0:00:00.236) 0:01:07.392 ******
2026-01-01 00:28:21.734670 | orchestrator | ok: [testbed-manager]
2026-01-01 00:28:21.734682 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:21.734694 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:21.734726 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:21.734739 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:21.734752 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:21.734764 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:21.734776 | orchestrator |
2026-01-01 00:28:21.734792 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-01-01 00:28:21.734812 | orchestrator | Thursday 01 January 2026 00:26:06 +0000 (0:00:01.147) 0:01:08.540 ******
2026-01-01 00:28:21.734832 | orchestrator | changed: [testbed-manager]
2026-01-01 00:28:21.734852 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:28:21.734870 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:28:21.734890 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:28:21.734908 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:28:21.734926 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:28:21.734945 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:28:21.734963 | orchestrator |
2026-01-01 00:28:21.734984 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-01-01 00:28:21.735004 | orchestrator | Thursday 01 January 2026 00:26:08 +0000 (0:00:01.712) 0:01:10.252 ******
2026-01-01 00:28:21.735024 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:21.735043 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:21.735055 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:21.735065 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:21.735077 | orchestrator | ok: [testbed-manager]
2026-01-01 00:28:21.735087 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:21.735098 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:21.735110 | orchestrator |
2026-01-01 00:28:21.735121 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-01-01 00:28:21.735132 | orchestrator | Thursday 01 January 2026 00:26:11 +0000 (0:00:02.397) 0:01:12.650 ******
2026-01-01 00:28:21.735143 | orchestrator | ok: [testbed-manager]
2026-01-01 00:28:21.735154 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:21.735164 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:21.735175 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:21.735186 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:21.735197 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:21.735207 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:21.735218 | orchestrator |
2026-01-01 00:28:21.735229 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-01-01 00:28:21.735240 | orchestrator | Thursday 01 January 2026 00:26:46 +0000 (0:00:35.059) 0:01:47.709 ******
2026-01-01 00:28:21.735251 | orchestrator | changed: [testbed-manager]
2026-01-01 00:28:21.735262 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:28:21.735272 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:28:21.735283 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:28:21.735294 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:28:21.735304 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:28:21.735315 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:28:21.735338 | orchestrator |
2026-01-01 00:28:21.735349 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-01-01 00:28:21.735359 | orchestrator | Thursday 01 January 2026 00:28:05 +0000 (0:01:18.978) 0:03:06.687 ******
2026-01-01 00:28:21.735370 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:21.735381 | orchestrator | ok: [testbed-manager]
2026-01-01 00:28:21.735392 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:21.735402 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:21.735413 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:21.735424 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:21.735435 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:21.735445 | orchestrator |
2026-01-01 00:28:21.735456 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-01-01 00:28:21.735468 | orchestrator | Thursday 01 January 2026 00:28:06 +0000 (0:00:01.729) 0:03:08.416 ******
2026-01-01 00:28:21.735478 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:21.735489 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:21.735500 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:21.735510 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:21.735539 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:21.735550 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:21.735561 | orchestrator | changed: [testbed-manager]
2026-01-01 00:28:21.735572 | orchestrator |
2026-01-01 00:28:21.735583 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-01-01 00:28:21.735594 | orchestrator | Thursday 01 January 2026 00:28:20 +0000 (0:00:13.604) 0:03:22.021 ******
2026-01-01 00:28:21.735640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-01-01 00:28:21.735667 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-01-01 00:28:21.735681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-01-01 00:28:21.735695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-01 00:28:21.735744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-01 00:28:21.735756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-01-01 00:28:21.735776 | orchestrator |
2026-01-01 00:28:21.735794 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-01-01 00:28:21.735815 | orchestrator | Thursday 01 January 2026 00:28:20 +0000 (0:00:00.412) 0:03:22.433 ******
2026-01-01 00:28:21.735839 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:28:21.735859 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:28:21.735878 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:28:21.735897 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:28:21.735917 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:28:21.735938 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:28:21.735957 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:28:21.735972 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:28:21.735983 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:28:21.735994 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:28:21.736005 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-01 00:28:21.736016 | orchestrator |
2026-01-01 00:28:21.736027 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-01-01 00:28:21.736038 | orchestrator | Thursday 01 January 2026 00:28:21 +0000 (0:00:00.775) 0:03:23.209 ******
2026-01-01 00:28:21.736048 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:28:21.736061 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:28:21.736072 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:28:21.736083 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:28:21.736095 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:28:21.736116 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:28:28.779007 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:28:28.779156 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:28:28.779178 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:28:28.779192 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:28:28.779205 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:28:28.779219 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:28:28.779232 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:28:28.779246 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:28:28.779258 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:28:28.779271 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:28:28.779285 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:28:28.779300 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:28:28.779314 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:28:28.779364 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:28:28.779380 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:28:28.779393 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:28:28.779406 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:28:28.779420 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:28:28.779433 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:28:28.779447 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:28:28.779461 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:28:28.779475 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:28:28.779490 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:28:28.779527 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:28:28.779541 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:28:28.779552 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:28:28.779561 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:28:28.779569 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:28:28.779577 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:28:28.779586 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:28:28.779594 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:28:28.779601 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:28:28.779609 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:28:28.779617 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:28:28.779625 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:28:28.779632 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:28:28.779640 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:28:28.779648 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:28:28.779656 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:28:28.779664 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:28:28.779676 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-01 00:28:28.779684 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:28:28.779691 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:28:28.779749 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-01 00:28:28.779760 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:28:28.779768 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:28:28.779791 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-01 00:28:28.779805 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:28:28.779817 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:28:28.779830 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-01 00:28:28.779843 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:28:28.779857 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:28:28.779870 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-01 00:28:28.779884 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:28:28.779895 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:28:28.779903 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-01 00:28:28.779911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:28:28.779919 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:28:28.779927 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-01 00:28:28.779935 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:28:28.779943 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:28:28.779951 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:28:28.779959 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:28:28.779967 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-01 00:28:28.779975 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:28:28.779982 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-01 00:28:28.779990 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:28:28.779998 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-01 00:28:28.780006 | orchestrator |
2026-01-01 00:28:28.780015 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-01 00:28:28.780023 | orchestrator | Thursday 01 January 2026 00:28:27 +0000 (0:00:05.852) 0:03:29.062 ******
2026-01-01 00:28:28.780032 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:28:28.780040 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:28:28.780048 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:28:28.780055 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:28:28.780063 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:28:28.780071 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:28:28.780079 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-01 00:28:28.780087 | orchestrator |
2026-01-01 00:28:28.780096 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-01 00:28:28.780103 | orchestrator | Thursday 01 January 2026 00:28:28 +0000 (0:00:00.701) 0:03:29.763 ******
2026-01-01 00:28:28.780111 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:28.780127 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:28:28.780135 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:28.780143 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:28.780151 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:28:28.780159 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:28:28.780172 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:28.780180 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:28:28.780188 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:28.780196 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:28.780211 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:43.582974 | orchestrator |
2026-01-01 00:28:43.583096 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-01 00:28:43.583114 | orchestrator | Thursday 01 January 2026 00:28:28 +0000 (0:00:00.561) 0:03:30.325 ******
2026-01-01 00:28:43.583127 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:43.583140 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:28:43.583153 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:43.583164 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:28:43.583175 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:43.583187 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:43.583198 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:28:43.583209 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:28:43.583220 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:43.583231 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:43.583242 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-01 00:28:43.583253 | orchestrator |
2026-01-01 00:28:43.583264 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-01 00:28:43.583275 | orchestrator | Thursday 01 January 2026 00:28:30 +0000 (0:00:01.584) 0:03:31.909 ******
2026-01-01 00:28:43.583287 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:28:43.583298 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:28:43.583309 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:28:43.583319 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:28:43.583330 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:28:43.583341 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:28:43.583353 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:28:43.583364 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:28:43.583375 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:28:43.583386 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:28:43.583397 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-01 00:28:43.583431 | orchestrator |
2026-01-01 00:28:43.583443 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-01 00:28:43.583454 | orchestrator | Thursday 01 January 2026 00:28:30 +0000 (0:00:00.622) 0:03:32.532 ******
2026-01-01 00:28:43.583465 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:28:43.583476 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:28:43.583487 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:28:43.583499 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:28:43.583512 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:28:43.583525 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:28:43.583538 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:28:43.583550 | orchestrator |
2026-01-01 00:28:43.583564 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-01 00:28:43.583576 | orchestrator | Thursday 01 January 2026 00:28:31 +0000 (0:00:00.323) 0:03:32.856 ******
2026-01-01 00:28:43.583589 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:43.583602 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:43.583615 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:43.583627 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:43.583639 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:43.583652 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:43.583664 | orchestrator | ok: [testbed-manager]
2026-01-01 00:28:43.583677 | orchestrator |
2026-01-01 00:28:43.583690 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-01 00:28:43.583747 | orchestrator | Thursday 01 January 2026 00:28:36 +0000 (0:00:05.668) 0:03:38.524 ******
2026-01-01 00:28:43.583758 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-01 00:28:43.583770 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-01 00:28:43.583781 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:28:43.583792 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-01 00:28:43.583803 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:28:43.583814 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:28:43.583825 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-01 00:28:43.583836 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-01 00:28:43.583847 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:28:43.583858 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:28:43.583868 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-01 00:28:43.583879 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:28:43.583904 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-01 00:28:43.583916 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:28:43.583927 | orchestrator |
2026-01-01 00:28:43.583938 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-01 00:28:43.583949 | orchestrator | Thursday 01 January 2026 00:28:37 +0000 (0:00:00.336) 0:03:38.861 ******
2026-01-01 00:28:43.583960 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-01 00:28:43.583970 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-01 00:28:43.583981 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-01 00:28:43.584009 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-01 00:28:43.584021 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-01 00:28:43.584032 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-01 00:28:43.584042 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-01 00:28:43.584053 | orchestrator |
2026-01-01 00:28:43.584064 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-01 00:28:43.584075 | orchestrator | Thursday 01 January 2026 00:28:38 +0000 (0:00:01.352) 0:03:40.213 ******
2026-01-01 00:28:43.584088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:28:43.584101 | orchestrator |
2026-01-01 00:28:43.584121 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-01 00:28:43.584132 | orchestrator | Thursday 01 January 2026 00:28:39 +0000 (0:00:00.444) 0:03:40.658 ******
2026-01-01 00:28:43.584143 | orchestrator | ok: [testbed-manager]
2026-01-01 00:28:43.584154 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:28:43.584164 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:28:43.584175 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:28:43.584186 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:28:43.584197 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:28:43.584207 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:28:43.584218 | orchestrator |
2026-01-01 00:28:43.584229 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-01 00:28:43.584240 | orchestrator | Thursday 01 January 2026 00:28:40 +0000 (0:00:01.391) 0:03:42.049
****** 2026-01-01 00:28:43.584251 | orchestrator | ok: [testbed-manager] 2026-01-01 00:28:43.584262 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:28:43.584272 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:28:43.584283 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:28:43.584294 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:28:43.584304 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:28:43.584315 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:28:43.584326 | orchestrator | 2026-01-01 00:28:43.584337 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-01-01 00:28:43.584348 | orchestrator | Thursday 01 January 2026 00:28:41 +0000 (0:00:00.621) 0:03:42.671 ****** 2026-01-01 00:28:43.584359 | orchestrator | changed: [testbed-manager] 2026-01-01 00:28:43.584370 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:28:43.584381 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:28:43.584392 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:28:43.584402 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:28:43.584413 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:28:43.584424 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:28:43.584435 | orchestrator | 2026-01-01 00:28:43.584445 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-01-01 00:28:43.584456 | orchestrator | Thursday 01 January 2026 00:28:41 +0000 (0:00:00.681) 0:03:43.353 ****** 2026-01-01 00:28:43.584467 | orchestrator | ok: [testbed-manager] 2026-01-01 00:28:43.584478 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:28:43.584489 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:28:43.584499 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:28:43.584510 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:28:43.584521 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:28:43.584531 | orchestrator | ok: [testbed-node-3] 2026-01-01 
00:28:43.584542 | orchestrator | 2026-01-01 00:28:43.584553 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-01-01 00:28:43.584564 | orchestrator | Thursday 01 January 2026 00:28:42 +0000 (0:00:00.643) 0:03:43.997 ****** 2026-01-01 00:28:43.584580 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225912.88113, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:43.584596 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225933.0173085, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:43.584614 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225939.1982079, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:43.584649 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225949.0446076, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:48.595354 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225940.3272164, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:48.595503 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225927.7400186, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:48.595519 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767225949.8588548, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:48.595531 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:48.595542 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:48.595588 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 
1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:48.595612 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:48.595643 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:48.595655 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2026-01-01 00:28:48.595665 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 00:28:48.595676 | orchestrator | 2026-01-01 00:28:48.595728 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-01-01 00:28:48.595742 | orchestrator | Thursday 01 January 2026 00:28:43 +0000 (0:00:01.127) 0:03:45.124 ****** 2026-01-01 00:28:48.595752 | orchestrator | changed: [testbed-manager] 2026-01-01 00:28:48.595763 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:28:48.595773 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:28:48.595782 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:28:48.595792 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:28:48.595802 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:28:48.595812 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:28:48.595822 | orchestrator | 2026-01-01 00:28:48.595832 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-01-01 00:28:48.595844 | orchestrator | Thursday 01 January 2026 00:28:44 +0000 (0:00:01.098) 0:03:46.222 ****** 2026-01-01 00:28:48.595856 | orchestrator | changed: [testbed-manager] 2026-01-01 00:28:48.595867 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:28:48.595887 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:28:48.595899 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:28:48.595911 | orchestrator | changed: 
[testbed-node-0] 2026-01-01 00:28:48.595922 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:28:48.595934 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:28:48.595945 | orchestrator | 2026-01-01 00:28:48.595957 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-01-01 00:28:48.595969 | orchestrator | Thursday 01 January 2026 00:28:45 +0000 (0:00:01.227) 0:03:47.450 ****** 2026-01-01 00:28:48.595980 | orchestrator | changed: [testbed-manager] 2026-01-01 00:28:48.595992 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:28:48.596003 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:28:48.596014 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:28:48.596026 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:28:48.596037 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:28:48.596048 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:28:48.596059 | orchestrator | 2026-01-01 00:28:48.596071 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-01-01 00:28:48.596082 | orchestrator | Thursday 01 January 2026 00:28:47 +0000 (0:00:01.211) 0:03:48.661 ****** 2026-01-01 00:28:48.596094 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:28:48.596106 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:28:48.596118 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:28:48.596129 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:28:48.596140 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:28:48.596152 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:28:48.596163 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:28:48.596174 | orchestrator | 2026-01-01 00:28:48.596192 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-01-01 00:28:48.596203 | orchestrator | Thursday 01 January 2026 00:28:47 +0000 (0:00:00.285) 0:03:48.947 
****** 2026-01-01 00:28:48.596213 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:28:48.596224 | orchestrator | ok: [testbed-manager] 2026-01-01 00:28:48.596233 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:28:48.596243 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:28:48.596253 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:28:48.596263 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:28:48.596273 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:28:48.596282 | orchestrator | 2026-01-01 00:28:48.596292 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-01-01 00:28:48.596302 | orchestrator | Thursday 01 January 2026 00:28:48 +0000 (0:00:00.788) 0:03:49.735 ****** 2026-01-01 00:28:48.596314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:28:48.596326 | orchestrator | 2026-01-01 00:28:48.596336 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-01-01 00:28:48.596353 | orchestrator | Thursday 01 January 2026 00:28:48 +0000 (0:00:00.411) 0:03:50.147 ****** 2026-01-01 00:30:11.392070 | orchestrator | ok: [testbed-manager] 2026-01-01 00:30:11.392196 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:30:11.392213 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:30:11.392225 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:30:11.392236 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:30:11.392247 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:30:11.392258 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:30:11.392270 | orchestrator | 2026-01-01 00:30:11.392282 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-01-01 00:30:11.392295 | 
orchestrator | Thursday 01 January 2026 00:28:57 +0000 (0:00:09.141) 0:03:59.289 ****** 2026-01-01 00:30:11.392306 | orchestrator | ok: [testbed-manager] 2026-01-01 00:30:11.392318 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:30:11.392329 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:30:11.392367 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:30:11.392378 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:30:11.392389 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:30:11.392400 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:30:11.392411 | orchestrator | 2026-01-01 00:30:11.392423 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-01-01 00:30:11.392434 | orchestrator | Thursday 01 January 2026 00:28:59 +0000 (0:00:01.287) 0:04:00.576 ****** 2026-01-01 00:30:11.392445 | orchestrator | ok: [testbed-manager] 2026-01-01 00:30:11.392456 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:30:11.392467 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:30:11.392478 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:30:11.392488 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:30:11.392499 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:30:11.392510 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:30:11.392521 | orchestrator | 2026-01-01 00:30:11.392532 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-01-01 00:30:11.392543 | orchestrator | Thursday 01 January 2026 00:29:00 +0000 (0:00:01.127) 0:04:01.704 ****** 2026-01-01 00:30:11.392554 | orchestrator | ok: [testbed-manager] 2026-01-01 00:30:11.392565 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:30:11.392576 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:30:11.392587 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:30:11.392600 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:30:11.392613 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:30:11.392627 | 
orchestrator | ok: [testbed-node-2] 2026-01-01 00:30:11.392665 | orchestrator | 2026-01-01 00:30:11.392678 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-01-01 00:30:11.392693 | orchestrator | Thursday 01 January 2026 00:29:00 +0000 (0:00:00.325) 0:04:02.030 ****** 2026-01-01 00:30:11.392706 | orchestrator | ok: [testbed-manager] 2026-01-01 00:30:11.392719 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:30:11.392732 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:30:11.392744 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:30:11.392756 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:30:11.392768 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:30:11.392780 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:30:11.392793 | orchestrator | 2026-01-01 00:30:11.392806 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-01-01 00:30:11.392819 | orchestrator | Thursday 01 January 2026 00:29:00 +0000 (0:00:00.368) 0:04:02.398 ****** 2026-01-01 00:30:11.392832 | orchestrator | ok: [testbed-manager] 2026-01-01 00:30:11.392845 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:30:11.392858 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:30:11.392870 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:30:11.392883 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:30:11.392895 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:30:11.392908 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:30:11.392920 | orchestrator | 2026-01-01 00:30:11.392934 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-01-01 00:30:11.392946 | orchestrator | Thursday 01 January 2026 00:29:01 +0000 (0:00:00.334) 0:04:02.733 ****** 2026-01-01 00:30:11.392958 | orchestrator | ok: [testbed-manager] 2026-01-01 00:30:11.392969 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:30:11.392980 | 
orchestrator | ok: [testbed-node-1] 2026-01-01 00:30:11.392991 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:30:11.393002 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:30:11.393013 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:30:11.393024 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:30:11.393035 | orchestrator | 2026-01-01 00:30:11.393046 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-01-01 00:30:11.393056 | orchestrator | Thursday 01 January 2026 00:29:07 +0000 (0:00:05.954) 0:04:08.687 ****** 2026-01-01 00:30:11.393069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:30:11.393092 | orchestrator | 2026-01-01 00:30:11.393103 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-01-01 00:30:11.393131 | orchestrator | Thursday 01 January 2026 00:29:07 +0000 (0:00:00.459) 0:04:09.147 ****** 2026-01-01 00:30:11.393142 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-01-01 00:30:11.393153 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-01-01 00:30:11.393165 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:30:11.393176 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-01-01 00:30:11.393187 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-01-01 00:30:11.393198 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-01-01 00:30:11.393209 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-01-01 00:30:11.393233 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:30:11.393244 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:30:11.393255 | orchestrator | skipping: 
[testbed-node-5] => (item=apt-daily-upgrade)  2026-01-01 00:30:11.393266 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-01-01 00:30:11.393277 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-01-01 00:30:11.393287 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-01-01 00:30:11.393298 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:30:11.393309 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-01-01 00:30:11.393320 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:30:11.393349 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-01-01 00:30:11.393360 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:30:11.393371 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-01-01 00:30:11.393382 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-01-01 00:30:11.393393 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:30:11.393404 | orchestrator | 2026-01-01 00:30:11.393415 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-01-01 00:30:11.393426 | orchestrator | Thursday 01 January 2026 00:29:07 +0000 (0:00:00.332) 0:04:09.480 ****** 2026-01-01 00:30:11.393438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:30:11.393449 | orchestrator | 2026-01-01 00:30:11.393460 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-01-01 00:30:11.393471 | orchestrator | Thursday 01 January 2026 00:29:08 +0000 (0:00:00.406) 0:04:09.886 ****** 2026-01-01 00:30:11.393482 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-01-01 00:30:11.393493 | orchestrator | 
skipping: [testbed-manager] 2026-01-01 00:30:11.393504 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-01-01 00:30:11.393515 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-01-01 00:30:11.393526 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:30:11.393537 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-01-01 00:30:11.393548 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:30:11.393558 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-01-01 00:30:11.393569 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:30:11.393580 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-01-01 00:30:11.393591 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:30:11.393601 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:30:11.393622 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-01-01 00:30:11.393634 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:30:11.393727 | orchestrator | 2026-01-01 00:30:11.393747 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-01-01 00:30:11.393758 | orchestrator | Thursday 01 January 2026 00:29:08 +0000 (0:00:00.376) 0:04:10.262 ****** 2026-01-01 00:30:11.393770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:30:11.393781 | orchestrator | 2026-01-01 00:30:11.393792 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-01-01 00:30:11.393803 | orchestrator | Thursday 01 January 2026 00:29:09 +0000 (0:00:00.417) 0:04:10.680 ****** 2026-01-01 00:30:11.393813 | orchestrator | changed: [testbed-node-3] 
2026-01-01 00:30:11.393824 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:11.393835 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:30:11.393846 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:11.393857 | orchestrator | changed: [testbed-manager]
2026-01-01 00:30:11.393868 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:11.393878 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:30:11.393889 | orchestrator |
2026-01-01 00:30:11.393900 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-01 00:30:11.393911 | orchestrator | Thursday 01 January 2026 00:29:46 +0000 (0:00:36.955) 0:04:47.635 ******
2026-01-01 00:30:11.393922 | orchestrator | changed: [testbed-manager]
2026-01-01 00:30:11.393933 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:11.393943 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:30:11.393954 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:30:11.393965 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:11.393976 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:11.393986 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:30:11.393997 | orchestrator |
2026-01-01 00:30:11.394008 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-01 00:30:11.394082 | orchestrator | Thursday 01 January 2026 00:29:54 +0000 (0:00:08.451) 0:04:56.086 ******
2026-01-01 00:30:11.394094 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:30:11.394105 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:11.394116 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:11.394127 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:11.394138 | orchestrator | changed: [testbed-manager]
2026-01-01 00:30:11.394149 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:30:11.394160 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:30:11.394171 | orchestrator |
2026-01-01 00:30:11.394182 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-01 00:30:11.394193 | orchestrator | Thursday 01 January 2026 00:30:02 +0000 (0:00:08.344) 0:05:04.431 ******
2026-01-01 00:30:11.394203 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:11.394213 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:11.394223 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:11.394232 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:11.394242 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:11.394252 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:11.394261 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:11.394271 | orchestrator |
2026-01-01 00:30:11.394281 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-01 00:30:11.394291 | orchestrator | Thursday 01 January 2026 00:30:04 +0000 (0:00:01.930) 0:05:06.361 ******
2026-01-01 00:30:11.394301 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:30:11.394310 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:30:11.394320 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:11.394329 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:11.394339 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:11.394349 | orchestrator | changed: [testbed-manager]
2026-01-01 00:30:11.394358 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:30:11.394368 | orchestrator |
2026-01-01 00:30:11.394386 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-01 00:30:23.232345 | orchestrator | Thursday 01 January 2026 00:30:11 +0000 (0:00:06.568) 0:05:12.929 ******
2026-01-01 00:30:23.232474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:30:23.232485 | orchestrator |
2026-01-01 00:30:23.232493 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-01 00:30:23.232500 | orchestrator | Thursday 01 January 2026 00:30:11 +0000 (0:00:00.481) 0:05:13.411 ******
2026-01-01 00:30:23.232506 | orchestrator | changed: [testbed-manager]
2026-01-01 00:30:23.232514 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:30:23.232521 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:30:23.232527 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:30:23.232533 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:23.232538 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:23.232545 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:23.232551 | orchestrator |
2026-01-01 00:30:23.232557 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-01 00:30:23.232563 | orchestrator | Thursday 01 January 2026 00:30:12 +0000 (0:00:02.036) 0:05:14.193 ******
2026-01-01 00:30:23.232569 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:23.232577 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:23.232583 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:23.232589 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:23.232594 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:23.232601 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:23.232684 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:23.232692 | orchestrator |
2026-01-01 00:30:23.232699 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-01 00:30:23.232705 | orchestrator | Thursday 01 January 2026 00:30:14 +0000 (0:00:02.036) 0:05:16.229 ******
2026-01-01 00:30:23.232712 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:30:23.232718 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:30:23.232725 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:30:23.232731 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:30:23.232737 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:30:23.232743 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:30:23.232750 | orchestrator | changed: [testbed-manager]
2026-01-01 00:30:23.232756 | orchestrator |
2026-01-01 00:30:23.232762 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-01 00:30:23.232769 | orchestrator | Thursday 01 January 2026 00:30:15 +0000 (0:00:00.811) 0:05:17.040 ******
2026-01-01 00:30:23.232775 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:30:23.232781 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:30:23.232787 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:30:23.232793 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:30:23.232799 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:30:23.232806 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:30:23.232812 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:30:23.232818 | orchestrator |
2026-01-01 00:30:23.232824 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-01 00:30:23.232830 | orchestrator | Thursday 01 January 2026 00:30:15 +0000 (0:00:00.297) 0:05:17.337 ******
2026-01-01 00:30:23.232837 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:30:23.232843 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:30:23.232849 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:30:23.232855 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:30:23.232862 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:30:23.232868 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:30:23.232875 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:30:23.232881 | orchestrator |
2026-01-01 00:30:23.232888 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-01 00:30:23.232922 | orchestrator | Thursday 01 January 2026 00:30:16 +0000 (0:00:00.434) 0:05:17.772 ******
2026-01-01 00:30:23.232928 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:23.232935 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:23.232942 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:23.232948 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:23.232956 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:23.232962 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:23.232968 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:23.232975 | orchestrator |
2026-01-01 00:30:23.232981 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-01 00:30:23.232988 | orchestrator | Thursday 01 January 2026 00:30:16 +0000 (0:00:00.329) 0:05:18.102 ******
2026-01-01 00:30:23.232995 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:30:23.233001 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:30:23.233007 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:30:23.233018 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:30:23.233025 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:30:23.233031 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:30:23.233037 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:30:23.233045 | orchestrator |
2026-01-01 00:30:23.233052 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-01 00:30:23.233060 | orchestrator | Thursday 01 January 2026 00:30:16 +0000 (0:00:00.298) 0:05:18.400 ******
2026-01-01 00:30:23.233067 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:23.233073 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:23.233079 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:23.233086 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:23.233092 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:23.233098 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:23.233104 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:23.233110 | orchestrator |
2026-01-01 00:30:23.233117 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-01-01 00:30:23.233123 | orchestrator | Thursday 01 January 2026 00:30:17 +0000 (0:00:00.346) 0:05:18.747 ******
2026-01-01 00:30:23.233129 | orchestrator | ok: [testbed-manager] =>
2026-01-01 00:30:23.233135 | orchestrator |   docker_version: 5:27.5.1
2026-01-01 00:30:23.233142 | orchestrator | ok: [testbed-node-3] =>
2026-01-01 00:30:23.233148 | orchestrator |   docker_version: 5:27.5.1
2026-01-01 00:30:23.233154 | orchestrator | ok: [testbed-node-4] =>
2026-01-01 00:30:23.233160 | orchestrator |   docker_version: 5:27.5.1
2026-01-01 00:30:23.233167 | orchestrator | ok: [testbed-node-5] =>
2026-01-01 00:30:23.233173 | orchestrator |   docker_version: 5:27.5.1
2026-01-01 00:30:23.233199 | orchestrator | ok: [testbed-node-0] =>
2026-01-01 00:30:23.233205 | orchestrator |   docker_version: 5:27.5.1
2026-01-01 00:30:23.233212 | orchestrator | ok: [testbed-node-1] =>
2026-01-01 00:30:23.233218 | orchestrator |   docker_version: 5:27.5.1
2026-01-01 00:30:23.233224 | orchestrator | ok: [testbed-node-2] =>
2026-01-01 00:30:23.233231 | orchestrator |   docker_version: 5:27.5.1
2026-01-01 00:30:23.233237 | orchestrator |
2026-01-01 00:30:23.233244 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-01-01 00:30:23.233250 | orchestrator | Thursday 01 January 2026 00:30:17 +0000 (0:00:00.298) 0:05:19.045 ******
2026-01-01 00:30:23.233256 | orchestrator | ok: [testbed-manager] =>
2026-01-01 00:30:23.233263 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-01 00:30:23.233269 | orchestrator | ok: [testbed-node-3] =>
2026-01-01 00:30:23.233275 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-01 00:30:23.233282 | orchestrator | ok: [testbed-node-4] =>
2026-01-01 00:30:23.233288 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-01 00:30:23.233294 | orchestrator | ok: [testbed-node-5] =>
2026-01-01 00:30:23.233301 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-01 00:30:23.233307 | orchestrator | ok: [testbed-node-0] =>
2026-01-01 00:30:23.233313 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-01 00:30:23.233325 | orchestrator | ok: [testbed-node-1] =>
2026-01-01 00:30:23.233331 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-01 00:30:23.233337 | orchestrator | ok: [testbed-node-2] =>
2026-01-01 00:30:23.233343 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-01 00:30:23.233349 | orchestrator |
2026-01-01 00:30:23.233355 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-01 00:30:23.233361 | orchestrator | Thursday 01 January 2026 00:30:17 +0000 (0:00:00.307) 0:05:19.352 ******
2026-01-01 00:30:23.233368 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:30:23.233374 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:30:23.233380 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:30:23.233387 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:30:23.233393 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:30:23.233400 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:30:23.233406 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:30:23.233412 | orchestrator |
2026-01-01 00:30:23.233418 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-01 00:30:23.233424 | orchestrator | Thursday 01 January 2026 00:30:18 +0000 (0:00:00.283) 0:05:19.635 ******
2026-01-01 00:30:23.233430 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:30:23.233436 | orchestrator | skipping: [testbed-node-3]
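The `Print used docker version` and `Print used docker cli version` tasks above show the value `5:27.5.1` that the later `Pin docker package version` tasks enforce. As a rough sketch, such a pin is commonly expressed as an apt preferences entry like the following (the file path and package names are illustrative assumptions, not taken from the role itself):

```
# /etc/apt/preferences.d/docker  -- hypothetical path, for illustration only
# Pin the Docker engine and CLI to the version printed in the log above.
Package: docker-ce docker-ce-cli
Pin: version 5:27.5.1*
Pin-Priority: 1001
```

A priority above 1000 makes apt prefer the pinned version even over a newer installed candidate, which matches the "pin then install" ordering visible in the tasks below.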
2026-01-01 00:30:23.233442 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:30:23.233448 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:30:23.233454 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:30:23.233459 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:30:23.233465 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:30:23.233471 | orchestrator |
2026-01-01 00:30:23.233477 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-01 00:30:23.233483 | orchestrator | Thursday 01 January 2026 00:30:18 +0000 (0:00:00.295) 0:05:19.931 ******
2026-01-01 00:30:23.233491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:30:23.233499 | orchestrator |
2026-01-01 00:30:23.233505 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-01 00:30:23.233511 | orchestrator | Thursday 01 January 2026 00:30:18 +0000 (0:00:00.468) 0:05:20.400 ******
2026-01-01 00:30:23.233517 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:23.233524 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:23.233529 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:23.233536 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:23.233542 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:23.233548 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:23.233554 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:23.233560 | orchestrator |
2026-01-01 00:30:23.233567 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-01 00:30:23.233573 | orchestrator | Thursday 01 January 2026 00:30:19 +0000 (0:00:00.974) 0:05:21.374 ******
2026-01-01 00:30:23.233580 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:30:23.233585 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:30:23.233591 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:30:23.233597 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:30:23.233603 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:30:23.233609 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:30:23.233616 | orchestrator | ok: [testbed-manager]
2026-01-01 00:30:23.233636 | orchestrator |
2026-01-01 00:30:23.233642 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-01 00:30:23.233653 | orchestrator | Thursday 01 January 2026 00:30:22 +0000 (0:00:02.968) 0:05:24.343 ******
2026-01-01 00:30:23.233659 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-01 00:30:23.233666 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-01 00:30:23.233671 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-01 00:30:23.233733 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-01 00:30:23.233739 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-01 00:30:23.233746 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-01 00:30:23.233752 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:30:23.233758 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-01 00:30:23.233764 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-01 00:30:23.233770 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-01 00:30:23.233776 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:30:23.233782 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-01 00:30:23.233788 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-01 00:30:23.233794 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-01 00:30:23.233800 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:30:23.233806 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-01 00:30:23.233817 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-01 00:31:27.257116 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:31:27.257235 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-01 00:31:27.257251 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-01 00:31:27.257263 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-01 00:31:27.257274 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-01 00:31:27.257285 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:31:27.257297 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:31:27.257308 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-01 00:31:27.257319 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-01 00:31:27.257330 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-01 00:31:27.257341 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:31:27.257352 | orchestrator |
2026-01-01 00:31:27.257365 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-01 00:31:27.257377 | orchestrator | Thursday 01 January 2026 00:30:23 +0000 (0:00:00.665) 0:05:25.008 ******
2026-01-01 00:31:27.257389 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:27.257400 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:27.257411 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:27.257422 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:27.257433 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:27.257444 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:27.257455 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:27.257466 | orchestrator |
2026-01-01 00:31:27.257477 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-01 00:31:27.257488 | orchestrator | Thursday 01 January 2026 00:30:30 +0000 (0:00:06.974) 0:05:31.983 ******
2026-01-01 00:31:27.257499 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:27.257510 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:27.257521 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:27.257531 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:27.257542 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:27.257606 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:27.257619 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:27.257630 | orchestrator |
2026-01-01 00:31:27.257641 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-01 00:31:27.257654 | orchestrator | Thursday 01 January 2026 00:30:31 +0000 (0:00:01.107) 0:05:33.090 ******
2026-01-01 00:31:27.257667 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:27.257680 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:27.257693 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:27.257705 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:27.257717 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:27.257752 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:27.257765 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:27.257777 | orchestrator |
2026-01-01 00:31:27.257789 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-01 00:31:27.257803 | orchestrator | Thursday 01 January 2026 00:30:40 +0000 (0:00:09.223) 0:05:42.314 ******
2026-01-01 00:31:27.257815 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:27.257827 | orchestrator | changed: [testbed-manager]
2026-01-01 00:31:27.257841 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:27.257853 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:27.257866 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:27.257878 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:27.257891 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:27.257904 | orchestrator |
2026-01-01 00:31:27.257916 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-01 00:31:27.257927 | orchestrator | Thursday 01 January 2026 00:30:43 +0000 (0:00:03.213) 0:05:45.527 ******
2026-01-01 00:31:27.257938 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:27.257949 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:27.257959 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:27.257970 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:27.257981 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:27.257992 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:27.258002 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:27.258063 | orchestrator |
2026-01-01 00:31:27.258077 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-01 00:31:27.258088 | orchestrator | Thursday 01 January 2026 00:30:45 +0000 (0:00:01.443) 0:05:46.971 ******
2026-01-01 00:31:27.258099 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:27.258110 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:27.258121 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:27.258132 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:27.258143 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:27.258154 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:27.258179 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:27.258190 | orchestrator |
2026-01-01 00:31:27.258202 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-01 00:31:27.258213 | orchestrator | Thursday 01 January 2026 00:30:47 +0000 (0:00:01.691) 0:05:48.663 ******
2026-01-01 00:31:27.258224 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:31:27.258235 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:31:27.258246 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:31:27.258257 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:31:27.258267 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:31:27.258279 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:31:27.258290 | orchestrator | changed: [testbed-manager]
2026-01-01 00:31:27.258301 | orchestrator |
2026-01-01 00:31:27.258312 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-01 00:31:27.258324 | orchestrator | Thursday 01 January 2026 00:30:47 +0000 (0:00:00.597) 0:05:49.260 ******
2026-01-01 00:31:27.258335 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:27.258346 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:27.258357 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:27.258367 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:27.258378 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:27.258389 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:27.258400 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:27.258411 | orchestrator |
2026-01-01 00:31:27.258422 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-01 00:31:27.258451 | orchestrator | Thursday 01 January 2026 00:30:57 +0000 (0:00:09.941) 0:05:59.202 ******
2026-01-01 00:31:27.258463 | orchestrator | changed: [testbed-manager]
2026-01-01 00:31:27.258474 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:27.258493 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:27.258504 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:27.258515 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:27.258538 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:27.258575 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:27.258588 | orchestrator |
2026-01-01 00:31:27.258599 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-01 00:31:27.258610 | orchestrator | Thursday 01 January 2026 00:30:58 +0000 (0:00:01.008) 0:06:00.211 ******
2026-01-01 00:31:27.258621 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:27.258632 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:27.258643 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:27.258653 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:27.258664 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:27.258675 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:27.258686 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:27.258697 | orchestrator |
2026-01-01 00:31:27.258707 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-01 00:31:27.258718 | orchestrator | Thursday 01 January 2026 00:31:08 +0000 (0:00:09.536) 0:06:09.748 ******
2026-01-01 00:31:27.258729 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:27.258740 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:27.258751 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:27.258762 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:27.258772 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:27.258783 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:27.258794 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:27.258805 | orchestrator |
2026-01-01 00:31:27.258816 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-01 00:31:27.258827 | orchestrator | Thursday 01 January 2026 00:31:20 +0000 (0:00:11.898) 0:06:21.646 ******
2026-01-01 00:31:27.258838 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-01 00:31:27.258849 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-01 00:31:27.258860 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-01 00:31:27.258871 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-01 00:31:27.258881 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-01 00:31:27.258892 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-01 00:31:27.258903 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-01 00:31:27.258914 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-01 00:31:27.258925 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-01 00:31:27.258936 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-01 00:31:27.258947 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-01 00:31:27.258958 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-01 00:31:27.258969 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-01 00:31:27.258979 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-01 00:31:27.258990 | orchestrator |
2026-01-01 00:31:27.259001 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-01 00:31:27.259012 | orchestrator | Thursday 01 January 2026 00:31:21 +0000 (0:00:01.345) 0:06:22.992 ******
2026-01-01 00:31:27.259023 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:31:27.259034 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:31:27.259045 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:31:27.259056 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:31:27.259067 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:31:27.259077 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:31:27.259088 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:31:27.259099 | orchestrator |
2026-01-01 00:31:27.259110 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-01 00:31:27.259121 | orchestrator | Thursday 01 January 2026 00:31:22 +0000 (0:00:00.646) 0:06:23.639 ******
2026-01-01 00:31:27.259140 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:27.259151 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:27.259162 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:27.259172 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:27.259183 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:27.259194 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:27.259205 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:27.259215 | orchestrator |
2026-01-01 00:31:27.259226 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-01 00:31:27.259244 | orchestrator | Thursday 01 January 2026 00:31:26 +0000 (0:00:04.096) 0:06:27.735 ******
2026-01-01 00:31:27.259255 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:31:27.259266 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:31:27.259277 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:31:27.259288 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:31:27.259299 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:31:27.259309 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:31:27.259320 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:31:27.259331 | orchestrator |
2026-01-01 00:31:27.259343 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-01 00:31:27.259354 | orchestrator | Thursday 01 January 2026 00:31:26 +0000 (0:00:00.564) 0:06:28.300 ******
2026-01-01 00:31:27.259365 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-01 00:31:27.259376 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-01 00:31:27.259387 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:31:27.259398 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-01 00:31:27.259409 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-01 00:31:27.259420 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:31:27.259431 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-01 00:31:27.259442 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-01 00:31:27.259453 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:31:27.259472 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-01 00:31:47.191721 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-01 00:31:47.191803 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:31:47.191813 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-01 00:31:47.191820 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-01 00:31:47.191827 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:31:47.191834 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-01 00:31:47.191841 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-01 00:31:47.191847 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:31:47.191855 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-01 00:31:47.191862 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-01 00:31:47.191868 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:31:47.191875 | orchestrator |
2026-01-01 00:31:47.191884 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-01 00:31:47.191892 | orchestrator | Thursday 01 January 2026 00:31:27 +0000 (0:00:00.763) 0:06:29.064 ******
2026-01-01 00:31:47.191898 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:31:47.191905 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:31:47.191911 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:31:47.191920 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:31:47.191926 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:31:47.191932 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:31:47.191940 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:31:47.191946 | orchestrator |
2026-01-01 00:31:47.191952 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-01 00:31:47.191980 | orchestrator | Thursday 01 January 2026 00:31:27 +0000 (0:00:00.489) 0:06:29.553 ******
2026-01-01 00:31:47.191986 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:31:47.191993 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:31:47.192000 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:31:47.192006 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:31:47.192013 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:31:47.192019 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:31:47.192026 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:31:47.192032 | orchestrator |
2026-01-01 00:31:47.192038 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-01 00:31:47.192045 | orchestrator | Thursday 01 January 2026 00:31:28 +0000 (0:00:00.497) 0:06:30.051 ******
2026-01-01 00:31:47.192051 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:31:47.192058 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:31:47.192064 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:31:47.192071 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:31:47.192078 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:31:47.192084 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:31:47.192092 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:31:47.192098 | orchestrator |
2026-01-01 00:31:47.192106 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-01 00:31:47.192112 | orchestrator | Thursday 01 January 2026 00:31:29 +0000 (0:00:00.526) 0:06:30.577 ******
2026-01-01 00:31:47.192118 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:47.192125 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:31:47.192131 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:31:47.192137 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:31:47.192143 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:31:47.192149 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:31:47.192155 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:31:47.192163 | orchestrator |
2026-01-01 00:31:47.192167 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-01 00:31:47.192171 | orchestrator | Thursday 01 January 2026 00:31:31 +0000 (0:00:02.102) 0:06:32.679 ******
2026-01-01 00:31:47.192175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:31:47.192181 | orchestrator |
2026-01-01 00:31:47.192185 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-01 00:31:47.192189 | orchestrator | Thursday 01 January 2026 00:31:32 +0000 (0:00:00.927) 0:06:33.606 ******
2026-01-01 00:31:47.192193 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:47.192196 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:47.192200 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:47.192204 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:47.192208 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:47.192212 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:47.192216 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:47.192219 | orchestrator |
2026-01-01 00:31:47.192223 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-01 00:31:47.192227 | orchestrator | Thursday 01 January 2026 00:31:32 +0000 (0:00:00.903) 0:06:34.510 ******
2026-01-01 00:31:47.192231 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:47.192235 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:47.192239 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:47.192242 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:47.192246 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:47.192250 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:47.192253 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:47.192257 | orchestrator |
2026-01-01 00:31:47.192261 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-01 00:31:47.192269 | orchestrator | Thursday 01 January 2026 00:31:33 +0000 (0:00:00.874) 0:06:35.385 ******
2026-01-01 00:31:47.192273 | orchestrator | ok: [testbed-manager]
2026-01-01 00:31:47.192277 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:31:47.192282 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:31:47.192286 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:31:47.192290 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:31:47.192331 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:31:47.192336 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:31:47.192340 | orchestrator |
2026-01-01 00:31:47.192345 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay
file is changed] *** 2026-01-01 00:31:47.192361 | orchestrator | Thursday 01 January 2026 00:31:35 +0000 (0:00:01.607) 0:06:36.992 ****** 2026-01-01 00:31:47.192366 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:31:47.192370 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:31:47.192374 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:31:47.192378 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:31:47.192382 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:31:47.192385 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:31:47.192389 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:31:47.192393 | orchestrator | 2026-01-01 00:31:47.192397 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-01-01 00:31:47.192401 | orchestrator | Thursday 01 January 2026 00:31:36 +0000 (0:00:01.444) 0:06:38.437 ****** 2026-01-01 00:31:47.192404 | orchestrator | ok: [testbed-manager] 2026-01-01 00:31:47.192408 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:31:47.192412 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:31:47.192416 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:31:47.192419 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:31:47.192423 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:31:47.192427 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:31:47.192431 | orchestrator | 2026-01-01 00:31:47.192434 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-01-01 00:31:47.192438 | orchestrator | Thursday 01 January 2026 00:31:38 +0000 (0:00:01.369) 0:06:39.807 ****** 2026-01-01 00:31:47.192442 | orchestrator | changed: [testbed-manager] 2026-01-01 00:31:47.192446 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:31:47.192449 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:31:47.192453 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:31:47.192457 | orchestrator | changed: 
[testbed-node-0] 2026-01-01 00:31:47.192460 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:31:47.192464 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:31:47.192468 | orchestrator | 2026-01-01 00:31:47.192472 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-01 00:31:47.192476 | orchestrator | Thursday 01 January 2026 00:31:39 +0000 (0:00:01.449) 0:06:41.257 ****** 2026-01-01 00:31:47.192480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:31:47.192483 | orchestrator | 2026-01-01 00:31:47.192487 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-01 00:31:47.192491 | orchestrator | Thursday 01 January 2026 00:31:40 +0000 (0:00:01.091) 0:06:42.349 ****** 2026-01-01 00:31:47.192495 | orchestrator | ok: [testbed-manager] 2026-01-01 00:31:47.192499 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:31:47.192502 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:31:47.192506 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:31:47.192510 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:31:47.192514 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:31:47.192517 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:31:47.192521 | orchestrator | 2026-01-01 00:31:47.192561 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-01 00:31:47.192566 | orchestrator | Thursday 01 January 2026 00:31:42 +0000 (0:00:01.383) 0:06:43.732 ****** 2026-01-01 00:31:47.192578 | orchestrator | ok: [testbed-manager] 2026-01-01 00:31:47.192582 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:31:47.192586 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:31:47.192589 | orchestrator | ok: [testbed-node-5] 
2026-01-01 00:31:47.192593 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:31:47.192597 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:31:47.192600 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:31:47.192604 | orchestrator | 2026-01-01 00:31:47.192608 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-01 00:31:47.192611 | orchestrator | Thursday 01 January 2026 00:31:43 +0000 (0:00:01.133) 0:06:44.866 ****** 2026-01-01 00:31:47.192615 | orchestrator | ok: [testbed-manager] 2026-01-01 00:31:47.192619 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:31:47.192623 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:31:47.192626 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:31:47.192630 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:31:47.192634 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:31:47.192637 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:31:47.192641 | orchestrator | 2026-01-01 00:31:47.192645 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-01 00:31:47.192648 | orchestrator | Thursday 01 January 2026 00:31:44 +0000 (0:00:01.140) 0:06:46.007 ****** 2026-01-01 00:31:47.192652 | orchestrator | ok: [testbed-manager] 2026-01-01 00:31:47.192656 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:31:47.192660 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:31:47.192663 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:31:47.192667 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:31:47.192671 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:31:47.192674 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:31:47.192678 | orchestrator | 2026-01-01 00:31:47.192684 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-01-01 00:31:47.192688 | orchestrator | Thursday 01 January 2026 00:31:45 +0000 (0:00:01.461) 0:06:47.468 ****** 2026-01-01 00:31:47.192692 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:31:47.192696 | orchestrator | 2026-01-01 00:31:47.192700 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-01 00:31:47.192703 | orchestrator | Thursday 01 January 2026 00:31:46 +0000 (0:00:00.921) 0:06:48.390 ****** 2026-01-01 00:31:47.192707 | orchestrator | 2026-01-01 00:31:47.192711 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-01 00:31:47.192715 | orchestrator | Thursday 01 January 2026 00:31:46 +0000 (0:00:00.042) 0:06:48.433 ****** 2026-01-01 00:31:47.192718 | orchestrator | 2026-01-01 00:31:47.192722 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-01 00:31:47.192726 | orchestrator | Thursday 01 January 2026 00:31:46 +0000 (0:00:00.048) 0:06:48.481 ****** 2026-01-01 00:31:47.192729 | orchestrator | 2026-01-01 00:31:47.192733 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-01 00:31:47.192740 | orchestrator | Thursday 01 January 2026 00:31:46 +0000 (0:00:00.041) 0:06:48.522 ****** 2026-01-01 00:32:13.654240 | orchestrator | 2026-01-01 00:32:13.654389 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-01 00:32:13.654407 | orchestrator | Thursday 01 January 2026 00:31:47 +0000 (0:00:00.040) 0:06:48.562 ****** 2026-01-01 00:32:13.654419 | orchestrator | 2026-01-01 00:32:13.654431 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-01 00:32:13.654442 | orchestrator | Thursday 01 January 2026 00:31:47 +0000 (0:00:00.054) 0:06:48.616 ****** 2026-01-01 00:32:13.654454 | orchestrator | 
2026-01-01 00:32:13.654465 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-01 00:32:13.654476 | orchestrator | Thursday 01 January 2026 00:31:47 +0000 (0:00:00.070) 0:06:48.687 ****** 2026-01-01 00:32:13.654550 | orchestrator | 2026-01-01 00:32:13.654563 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-01 00:32:13.654574 | orchestrator | Thursday 01 January 2026 00:31:47 +0000 (0:00:00.040) 0:06:48.727 ****** 2026-01-01 00:32:13.654585 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:32:13.654598 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:32:13.654609 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:32:13.654620 | orchestrator | 2026-01-01 00:32:13.654631 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-01 00:32:13.654642 | orchestrator | Thursday 01 January 2026 00:31:48 +0000 (0:00:01.198) 0:06:49.925 ****** 2026-01-01 00:32:13.654654 | orchestrator | changed: [testbed-manager] 2026-01-01 00:32:13.654666 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:32:13.654677 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:32:13.654688 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:32:13.654699 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:32:13.654709 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:32:13.654720 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:32:13.654731 | orchestrator | 2026-01-01 00:32:13.654742 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-01 00:32:13.654756 | orchestrator | Thursday 01 January 2026 00:31:50 +0000 (0:00:01.685) 0:06:51.611 ****** 2026-01-01 00:32:13.654768 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:32:13.654781 | orchestrator | changed: [testbed-manager] 2026-01-01 00:32:13.654794 | orchestrator | changed: [testbed-node-4] 
2026-01-01 00:32:13.654806 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:32:13.654818 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:32:13.654830 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:32:13.654842 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:32:13.654855 | orchestrator | 2026-01-01 00:32:13.654868 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-01 00:32:13.654881 | orchestrator | Thursday 01 January 2026 00:31:51 +0000 (0:00:01.191) 0:06:52.803 ****** 2026-01-01 00:32:13.654894 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:32:13.654905 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:32:13.654915 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:32:13.654926 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:32:13.654937 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:32:13.654948 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:32:13.654959 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:32:13.654970 | orchestrator | 2026-01-01 00:32:13.654981 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-01 00:32:13.654992 | orchestrator | Thursday 01 January 2026 00:31:53 +0000 (0:00:02.387) 0:06:55.190 ****** 2026-01-01 00:32:13.655003 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:32:13.655014 | orchestrator | 2026-01-01 00:32:13.655025 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-01 00:32:13.655036 | orchestrator | Thursday 01 January 2026 00:31:53 +0000 (0:00:00.089) 0:06:55.280 ****** 2026-01-01 00:32:13.655046 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:13.655057 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:32:13.655071 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:32:13.655090 | orchestrator | changed: [testbed-node-5] 2026-01-01 
00:32:13.655108 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:32:13.655134 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:32:13.655154 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:32:13.655170 | orchestrator | 2026-01-01 00:32:13.655187 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-01 00:32:13.655205 | orchestrator | Thursday 01 January 2026 00:31:54 +0000 (0:00:00.927) 0:06:56.207 ****** 2026-01-01 00:32:13.655223 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:32:13.655243 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:32:13.655260 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:32:13.655290 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:32:13.655307 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:32:13.655324 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:32:13.655341 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:32:13.655360 | orchestrator | 2026-01-01 00:32:13.655379 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-01 00:32:13.655399 | orchestrator | Thursday 01 January 2026 00:31:55 +0000 (0:00:00.492) 0:06:56.699 ****** 2026-01-01 00:32:13.655420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:32:13.655441 | orchestrator | 2026-01-01 00:32:13.655459 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-01 00:32:13.655479 | orchestrator | Thursday 01 January 2026 00:31:56 +0000 (0:00:00.901) 0:06:57.601 ****** 2026-01-01 00:32:13.655540 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:13.655559 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:32:13.655577 | orchestrator 
| ok: [testbed-node-4] 2026-01-01 00:32:13.655594 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:32:13.655613 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:32:13.655633 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:32:13.655652 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:32:13.655672 | orchestrator | 2026-01-01 00:32:13.655684 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-01 00:32:13.655695 | orchestrator | Thursday 01 January 2026 00:31:56 +0000 (0:00:00.811) 0:06:58.413 ****** 2026-01-01 00:32:13.655706 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-01 00:32:13.655742 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-01 00:32:13.655754 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-01 00:32:13.655765 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-01 00:32:13.655776 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-01 00:32:13.655787 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-01 00:32:13.655797 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-01 00:32:13.655808 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-01 00:32:13.655820 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-01 00:32:13.655830 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-01 00:32:13.655841 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-01-01 00:32:13.655852 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-01 00:32:13.655862 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-01 00:32:13.655873 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-01 00:32:13.655884 | orchestrator | 2026-01-01 00:32:13.655895 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-01-01 00:32:13.655906 | orchestrator | Thursday 01 January 2026 00:31:59 +0000 (0:00:02.395) 0:07:00.808 ****** 2026-01-01 00:32:13.655917 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:32:13.655928 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:32:13.655939 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:32:13.655949 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:32:13.655960 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:32:13.655971 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:32:13.655982 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:32:13.655992 | orchestrator | 2026-01-01 00:32:13.656004 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-01 00:32:13.656015 | orchestrator | Thursday 01 January 2026 00:31:59 +0000 (0:00:00.700) 0:07:01.509 ****** 2026-01-01 00:32:13.656028 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:32:13.656057 | orchestrator | 2026-01-01 00:32:13.656077 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-01 00:32:13.656094 | orchestrator | Thursday 01 January 2026 00:32:00 +0000 (0:00:00.860) 0:07:02.369 ****** 2026-01-01 00:32:13.656113 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:13.656133 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:32:13.656152 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:32:13.656171 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:32:13.656189 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:32:13.656209 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:32:13.656228 | orchestrator | ok: 
[testbed-node-2] 2026-01-01 00:32:13.656246 | orchestrator | 2026-01-01 00:32:13.656266 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-01 00:32:13.656283 | orchestrator | Thursday 01 January 2026 00:32:01 +0000 (0:00:00.870) 0:07:03.239 ****** 2026-01-01 00:32:13.656300 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:13.656317 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:32:13.656335 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:32:13.656354 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:32:13.656374 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:32:13.656393 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:32:13.656413 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:32:13.656432 | orchestrator | 2026-01-01 00:32:13.656450 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-01 00:32:13.656468 | orchestrator | Thursday 01 January 2026 00:32:02 +0000 (0:00:01.100) 0:07:04.340 ****** 2026-01-01 00:32:13.656487 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:32:13.656532 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:32:13.656551 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:32:13.656570 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:32:13.656588 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:32:13.656599 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:32:13.656610 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:32:13.656621 | orchestrator | 2026-01-01 00:32:13.656632 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-01 00:32:13.656642 | orchestrator | Thursday 01 January 2026 00:32:03 +0000 (0:00:00.538) 0:07:04.879 ****** 2026-01-01 00:32:13.656671 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:13.656682 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:32:13.656695 | 
orchestrator | ok: [testbed-node-4] 2026-01-01 00:32:13.656714 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:32:13.656732 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:32:13.656750 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:32:13.656770 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:32:13.656789 | orchestrator | 2026-01-01 00:32:13.656807 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-01 00:32:13.656825 | orchestrator | Thursday 01 January 2026 00:32:04 +0000 (0:00:01.562) 0:07:06.441 ****** 2026-01-01 00:32:13.656846 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:32:13.656864 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:32:13.656883 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:32:13.656901 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:32:13.656920 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:32:13.656939 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:32:13.656960 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:32:13.656980 | orchestrator | 2026-01-01 00:32:13.657000 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-01 00:32:13.657020 | orchestrator | Thursday 01 January 2026 00:32:05 +0000 (0:00:00.567) 0:07:07.009 ****** 2026-01-01 00:32:13.657040 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:13.657058 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:32:13.657077 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:32:13.657094 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:32:13.657132 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:32:13.657150 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:32:13.657179 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:32:47.633206 | orchestrator | 2026-01-01 00:32:47.633365 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-01-01 00:32:47.633390 | orchestrator | Thursday 01 January 2026 00:32:13 +0000 (0:00:08.184) 0:07:15.193 ****** 2026-01-01 00:32:47.633408 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:47.633428 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:32:47.633447 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:32:47.633508 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:32:47.633525 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:32:47.633542 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:32:47.633558 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:32:47.633575 | orchestrator | 2026-01-01 00:32:47.633592 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-01-01 00:32:47.633608 | orchestrator | Thursday 01 January 2026 00:32:15 +0000 (0:00:01.699) 0:07:16.893 ****** 2026-01-01 00:32:47.633624 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:47.633640 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:32:47.633657 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:32:47.633673 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:32:47.633688 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:32:47.633705 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:32:47.633719 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:32:47.633731 | orchestrator | 2026-01-01 00:32:47.633742 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-01-01 00:32:47.633754 | orchestrator | Thursday 01 January 2026 00:32:17 +0000 (0:00:01.731) 0:07:18.625 ****** 2026-01-01 00:32:47.633766 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:47.633777 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:32:47.633788 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:32:47.633800 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:32:47.633810 | 
orchestrator | changed: [testbed-node-0] 2026-01-01 00:32:47.633822 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:32:47.633833 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:32:47.633844 | orchestrator | 2026-01-01 00:32:47.633855 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-01 00:32:47.633866 | orchestrator | Thursday 01 January 2026 00:32:18 +0000 (0:00:01.790) 0:07:20.415 ****** 2026-01-01 00:32:47.633877 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:47.633888 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:32:47.633899 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:32:47.633910 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:32:47.633920 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:32:47.633931 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:32:47.633942 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:32:47.633953 | orchestrator | 2026-01-01 00:32:47.633964 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-01 00:32:47.633975 | orchestrator | Thursday 01 January 2026 00:32:19 +0000 (0:00:00.872) 0:07:21.288 ****** 2026-01-01 00:32:47.633987 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:32:47.633998 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:32:47.634009 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:32:47.634104 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:32:47.634122 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:32:47.634139 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:32:47.634156 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:32:47.634173 | orchestrator | 2026-01-01 00:32:47.634189 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-01-01 00:32:47.634206 | orchestrator | Thursday 01 January 2026 00:32:20 +0000 (0:00:01.119) 0:07:22.408 ****** 
2026-01-01 00:32:47.634224 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:32:47.634239 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:32:47.634293 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:32:47.634312 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:32:47.634327 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:32:47.634337 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:32:47.634347 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:32:47.634356 | orchestrator | 2026-01-01 00:32:47.634366 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-01-01 00:32:47.634376 | orchestrator | Thursday 01 January 2026 00:32:21 +0000 (0:00:00.542) 0:07:22.950 ****** 2026-01-01 00:32:47.634385 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:47.634394 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:32:47.634404 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:32:47.634413 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:32:47.634423 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:32:47.634432 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:32:47.634441 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:32:47.634479 | orchestrator | 2026-01-01 00:32:47.634491 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-01-01 00:32:47.634518 | orchestrator | Thursday 01 January 2026 00:32:21 +0000 (0:00:00.577) 0:07:23.527 ****** 2026-01-01 00:32:47.634528 | orchestrator | ok: [testbed-manager] 2026-01-01 00:32:47.634538 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:32:47.634547 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:32:47.634556 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:32:47.634566 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:32:47.634575 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:32:47.634585 | orchestrator | ok: [testbed-node-2] 2026-01-01 
00:32:47.634594 | orchestrator |
2026-01-01 00:32:47.634604 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-01 00:32:47.634613 | orchestrator | Thursday 01 January 2026 00:32:22 +0000 (0:00:00.573) 0:07:24.101 ******
2026-01-01 00:32:47.634623 | orchestrator | ok: [testbed-manager]
2026-01-01 00:32:47.634632 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:32:47.634642 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:32:47.634651 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:32:47.634661 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:32:47.634670 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:32:47.634679 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:32:47.634689 | orchestrator |
2026-01-01 00:32:47.634698 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-01 00:32:47.634708 | orchestrator | Thursday 01 January 2026 00:32:23 +0000 (0:00:00.877) 0:07:24.978 ******
2026-01-01 00:32:47.634717 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:32:47.634727 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:32:47.634736 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:32:47.634746 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:32:47.634755 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:32:47.634765 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:32:47.634774 | orchestrator | ok: [testbed-manager]
2026-01-01 00:32:47.634784 | orchestrator |
2026-01-01 00:32:47.634814 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-01 00:32:47.634824 | orchestrator | Thursday 01 January 2026 00:32:29 +0000 (0:00:05.863) 0:07:30.842 ******
2026-01-01 00:32:47.634834 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:32:47.634844 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:32:47.634860 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:32:47.634884 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:32:47.634901 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:32:47.634916 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:32:47.634932 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:32:47.634949 | orchestrator |
2026-01-01 00:32:47.634965 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-01 00:32:47.634981 | orchestrator | Thursday 01 January 2026 00:32:29 +0000 (0:00:00.545) 0:07:31.387 ******
2026-01-01 00:32:47.634992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:32:47.635016 | orchestrator |
2026-01-01 00:32:47.635026 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-01 00:32:47.635035 | orchestrator | Thursday 01 January 2026 00:32:30 +0000 (0:00:01.080) 0:07:32.467 ******
2026-01-01 00:32:47.635048 | orchestrator | ok: [testbed-manager]
2026-01-01 00:32:47.635064 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:32:47.635089 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:32:47.635107 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:32:47.635123 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:32:47.635139 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:32:47.635155 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:32:47.635171 | orchestrator |
2026-01-01 00:32:47.635188 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-01 00:32:47.635205 | orchestrator | Thursday 01 January 2026 00:32:32 +0000 (0:00:01.982) 0:07:34.450 ******
2026-01-01 00:32:47.635221 | orchestrator | ok: [testbed-manager]
2026-01-01 00:32:47.635234 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:32:47.635244 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:32:47.635254 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:32:47.635270 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:32:47.635286 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:32:47.635302 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:32:47.635317 | orchestrator |
2026-01-01 00:32:47.635332 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-01 00:32:47.635348 | orchestrator | Thursday 01 January 2026 00:32:34 +0000 (0:00:01.160) 0:07:35.611 ******
2026-01-01 00:32:47.635366 | orchestrator | ok: [testbed-manager]
2026-01-01 00:32:47.635383 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:32:47.635400 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:32:47.635417 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:32:47.635434 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:32:47.635483 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:32:47.635495 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:32:47.635505 | orchestrator |
2026-01-01 00:32:47.635515 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-01 00:32:47.635524 | orchestrator | Thursday 01 January 2026 00:32:34 +0000 (0:00:00.851) 0:07:36.463 ******
2026-01-01 00:32:47.635534 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-01 00:32:47.635547 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-01 00:32:47.635557 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-01 00:32:47.635566 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-01 00:32:47.635576 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-01 00:32:47.635587 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-01 00:32:47.635605 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-01 00:32:47.635621 | orchestrator |
2026-01-01 00:32:47.635637 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-01 00:32:47.635654 | orchestrator | Thursday 01 January 2026 00:32:36 +0000 (0:00:01.924) 0:07:38.387 ******
2026-01-01 00:32:47.635672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:32:47.635703 | orchestrator |
2026-01-01 00:32:47.635721 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-01 00:32:47.635737 | orchestrator | Thursday 01 January 2026 00:32:37 +0000 (0:00:00.870) 0:07:39.258 ******
2026-01-01 00:32:47.635753 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:32:47.635770 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:32:47.635786 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:32:47.635804 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:32:47.635821 | orchestrator | changed: [testbed-manager]
2026-01-01 00:32:47.635837 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:32:47.635854 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:32:47.635871 | orchestrator |
2026-01-01 00:32:47.635898 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-01 00:33:19.732148 | orchestrator | Thursday 01 January 2026 00:32:47 +0000 (0:00:09.916) 0:07:49.174 ******
2026-01-01 00:33:19.732289 | orchestrator | ok: [testbed-manager]
2026-01-01 00:33:19.732307 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:33:19.732319 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:33:19.732330 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:33:19.732341 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:33:19.732353 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:33:19.732364 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:33:19.732375 | orchestrator |
2026-01-01 00:33:19.732388 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-01 00:33:19.732400 | orchestrator | Thursday 01 January 2026 00:32:49 +0000 (0:00:02.046) 0:07:51.220 ******
2026-01-01 00:33:19.732411 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:33:19.732452 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:33:19.732464 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:33:19.732475 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:33:19.732486 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:33:19.732497 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:33:19.732508 | orchestrator |
2026-01-01 00:33:19.732519 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-01 00:33:19.732532 | orchestrator | Thursday 01 January 2026 00:32:50 +0000 (0:00:01.301) 0:07:52.521 ******
2026-01-01 00:33:19.732543 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:19.732556 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:19.732592 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:19.732604 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:19.732615 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:19.732626 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:19.732639 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:19.732651 | orchestrator |
2026-01-01 00:33:19.732664 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-01 00:33:19.732677 | orchestrator |
2026-01-01 00:33:19.732689 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-01 00:33:19.732702 | orchestrator | Thursday 01 January 2026 00:32:52 +0000 (0:00:01.233) 0:07:53.755 ******
2026-01-01 00:33:19.732714 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:33:19.732727 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:33:19.732740 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:33:19.732753 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:33:19.732765 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:33:19.732776 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:33:19.732789 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:33:19.732801 | orchestrator |
2026-01-01 00:33:19.732814 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-01 00:33:19.732828 | orchestrator |
2026-01-01 00:33:19.732840 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-01 00:33:19.732858 | orchestrator | Thursday 01 January 2026 00:32:52 +0000 (0:00:00.793) 0:07:54.548 ******
2026-01-01 00:33:19.732913 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:19.732935 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:19.732954 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:19.732973 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:19.732986 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:19.732997 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:19.733008 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:19.733020 | orchestrator |
2026-01-01 00:33:19.733031 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-01 00:33:19.733042 | orchestrator | Thursday 01 January 2026 00:32:54 +0000 (0:00:01.392) 0:07:55.941 ******
2026-01-01 00:33:19.733052 | orchestrator | ok: [testbed-manager]
2026-01-01 00:33:19.733063 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:33:19.733074 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:33:19.733085 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:33:19.733096 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:33:19.733107 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:33:19.733117 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:33:19.733128 | orchestrator |
2026-01-01 00:33:19.733139 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-01 00:33:19.733150 | orchestrator | Thursday 01 January 2026 00:32:55 +0000 (0:00:01.472) 0:07:57.413 ******
2026-01-01 00:33:19.733161 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:33:19.733172 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:33:19.733182 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:33:19.733193 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:33:19.733204 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:33:19.733215 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:33:19.733225 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:33:19.733236 | orchestrator |
2026-01-01 00:33:19.733261 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-01 00:33:19.733279 | orchestrator | Thursday 01 January 2026 00:32:56 +0000 (0:00:00.531) 0:07:57.945 ******
2026-01-01 00:33:19.733291 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:33:19.733304 | orchestrator |
2026-01-01 00:33:19.733316 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-01 00:33:19.733327 | orchestrator | Thursday 01 January 2026 00:32:57 +0000 (0:00:01.026) 0:07:58.972 ******
2026-01-01 00:33:19.733339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:33:19.733353 | orchestrator |
2026-01-01 00:33:19.733364 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-01 00:33:19.733375 | orchestrator | Thursday 01 January 2026 00:32:58 +0000 (0:00:00.791) 0:07:59.764 ******
2026-01-01 00:33:19.733386 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:19.733397 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:19.733408 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:19.733443 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:19.733455 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:19.733466 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:19.733477 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:19.733488 | orchestrator |
2026-01-01 00:33:19.733519 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-01 00:33:19.733531 | orchestrator | Thursday 01 January 2026 00:33:07 +0000 (0:00:08.917) 0:08:08.682 ******
2026-01-01 00:33:19.733542 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:19.733553 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:19.733564 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:19.733585 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:19.733596 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:19.733607 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:19.733618 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:19.733629 | orchestrator |
2026-01-01 00:33:19.733640 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-01 00:33:19.733651 | orchestrator | Thursday 01 January 2026 00:33:07 +0000 (0:00:00.866) 0:08:09.549 ******
2026-01-01 00:33:19.733662 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:19.733672 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:19.733683 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:19.733694 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:19.733705 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:19.733716 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:19.733726 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:19.733737 | orchestrator |
2026-01-01 00:33:19.733748 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-01 00:33:19.733759 | orchestrator | Thursday 01 January 2026 00:33:09 +0000 (0:00:01.394) 0:08:10.944 ******
2026-01-01 00:33:19.733770 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:19.733781 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:19.733791 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:19.733802 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:19.733813 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:19.733824 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:19.733834 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:19.733845 | orchestrator |
2026-01-01 00:33:19.733856 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-01 00:33:19.733867 | orchestrator | Thursday 01 January 2026 00:33:11 +0000 (0:00:02.606) 0:08:13.550 ******
2026-01-01 00:33:19.733878 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:19.733891 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:19.733909 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:19.733928 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:19.733947 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:19.733967 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:19.733986 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:19.734003 | orchestrator |
2026-01-01 00:33:19.734079 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-01 00:33:19.734091 | orchestrator | Thursday 01 January 2026 00:33:13 +0000 (0:00:01.254) 0:08:14.804 ******
2026-01-01 00:33:19.734102 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:19.734113 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:19.734124 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:19.734135 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:19.734146 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:19.734157 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:19.734168 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:19.734179 | orchestrator |
2026-01-01 00:33:19.734189 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-01 00:33:19.734200 | orchestrator |
2026-01-01 00:33:19.734211 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-01 00:33:19.734222 | orchestrator | Thursday 01 January 2026 00:33:14 +0000 (0:00:01.217) 0:08:16.022 ******
2026-01-01 00:33:19.734233 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:33:19.734245 | orchestrator |
2026-01-01 00:33:19.734256 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-01 00:33:19.734267 | orchestrator | Thursday 01 January 2026 00:33:15 +0000 (0:00:00.934) 0:08:16.957 ******
2026-01-01 00:33:19.734278 | orchestrator | ok: [testbed-manager]
2026-01-01 00:33:19.734288 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:33:19.734299 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:33:19.734319 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:33:19.734330 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:33:19.734341 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:33:19.734352 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:33:19.734363 | orchestrator |
2026-01-01 00:33:19.734374 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-01 00:33:19.734385 | orchestrator | Thursday 01 January 2026 00:33:16 +0000 (0:00:01.198) 0:08:18.155 ******
2026-01-01 00:33:19.734402 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:19.734413 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:19.734463 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:19.734474 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:19.734485 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:19.734496 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:19.734507 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:19.734517 | orchestrator |
2026-01-01 00:33:19.734528 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-01 00:33:19.734539 | orchestrator | Thursday 01 January 2026 00:33:17 +0000 (0:00:01.197) 0:08:19.352 ******
2026-01-01 00:33:19.734550 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:33:19.734561 | orchestrator |
2026-01-01 00:33:19.734572 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-01 00:33:19.734583 | orchestrator | Thursday 01 January 2026 00:33:18 +0000 (0:00:01.041) 0:08:20.394 ******
2026-01-01 00:33:19.734594 | orchestrator | ok: [testbed-manager]
2026-01-01 00:33:19.734605 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:33:19.734616 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:33:19.734626 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:33:19.734637 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:33:19.734648 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:33:19.734658 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:33:19.734669 | orchestrator |
2026-01-01 00:33:19.734690 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-01 00:33:21.460137 | orchestrator | Thursday 01 January 2026 00:33:19 +0000 (0:00:00.875) 0:08:21.269 ******
2026-01-01 00:33:21.460268 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:21.460283 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:21.460295 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:21.460306 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:21.460317 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:21.460328 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:21.460339 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:21.460350 | orchestrator |
2026-01-01 00:33:21.460362 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:33:21.460373 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-01 00:33:21.460386 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-01 00:33:21.460397 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-01 00:33:21.460408 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-01 00:33:21.460483 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-01 00:33:21.460495 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-01 00:33:21.460541 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-01 00:33:21.460553 | orchestrator |
2026-01-01 00:33:21.460564 | orchestrator |
2026-01-01 00:33:21.460575 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:33:21.460586 | orchestrator | Thursday 01 January 2026 00:33:20 +0000 (0:00:01.158) 0:08:22.428 ******
2026-01-01 00:33:21.460597 | orchestrator | ===============================================================================
2026-01-01 00:33:21.460608 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.98s
2026-01-01 00:33:21.460619 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.96s
2026-01-01 00:33:21.460630 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.06s
2026-01-01 00:33:21.460640 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.54s
2026-01-01 00:33:21.460651 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.60s
2026-01-01 00:33:21.460665 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.91s
2026-01-01 00:33:21.460678 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.90s
2026-01-01 00:33:21.460691 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.94s
2026-01-01 00:33:21.460703 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.92s
2026-01-01 00:33:21.460716 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.54s
2026-01-01 00:33:21.460729 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.22s
2026-01-01 00:33:21.460742 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.14s
2026-01-01 00:33:21.460756 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.92s
2026-01-01 00:33:21.460769 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.45s
2026-01-01 00:33:21.460782 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.34s
2026-01-01 00:33:21.460812 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.18s
2026-01-01 00:33:21.460826 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.97s
2026-01-01 00:33:21.460839 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.57s
2026-01-01 00:33:21.460852 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.95s
2026-01-01 00:33:21.460864 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.86s
2026-01-01 00:33:21.813224 | orchestrator | + osism apply fail2ban
2026-01-01 00:33:34.868948 | orchestrator | 2026-01-01 00:33:34 | INFO  | Task 46f45e54-3f31-459e-b26e-e409d11d104c (fail2ban) was prepared for execution.
2026-01-01 00:33:34.869057 | orchestrator | 2026-01-01 00:33:34 | INFO  | It takes a moment until task 46f45e54-3f31-459e-b26e-e409d11d104c (fail2ban) has been started and output is visible here.
2026-01-01 00:33:58.509436 | orchestrator |
2026-01-01 00:33:58.509593 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-01 00:33:58.509610 | orchestrator |
2026-01-01 00:33:58.509622 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-01 00:33:58.509649 | orchestrator | Thursday 01 January 2026 00:33:39 +0000 (0:00:00.276) 0:00:00.276 ******
2026-01-01 00:33:58.509661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:33:58.509674 | orchestrator |
2026-01-01 00:33:58.509685 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-01 00:33:58.509695 | orchestrator | Thursday 01 January 2026 00:33:41 +0000 (0:00:01.266) 0:00:01.543 ******
2026-01-01 00:33:58.509733 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:58.509745 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:58.509755 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:58.509765 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:58.509774 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:58.509784 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:58.509793 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:58.509803 | orchestrator |
2026-01-01 00:33:58.509813 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-01 00:33:58.509823 | orchestrator | Thursday 01 January 2026 00:33:53 +0000 (0:00:12.160) 0:00:13.703 ******
2026-01-01 00:33:58.509833 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:58.509842 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:58.509852 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:58.509862 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:58.509871 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:58.509881 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:58.509890 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:58.509900 | orchestrator |
2026-01-01 00:33:58.509910 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-01 00:33:58.509920 | orchestrator | Thursday 01 January 2026 00:33:54 +0000 (0:00:01.524) 0:00:15.227 ******
2026-01-01 00:33:58.509929 | orchestrator | ok: [testbed-manager]
2026-01-01 00:33:58.509942 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:33:58.509954 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:33:58.509965 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:33:58.509976 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:33:58.509987 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:33:58.509998 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:33:58.510009 | orchestrator |
2026-01-01 00:33:58.510074 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-01 00:33:58.510087 | orchestrator | Thursday 01 January 2026 00:33:56 +0000 (0:00:01.527) 0:00:16.755 ******
2026-01-01 00:33:58.510098 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:33:58.510110 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:33:58.510120 | orchestrator | changed: [testbed-manager]
2026-01-01 00:33:58.510131 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:33:58.510142 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:33:58.510154 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:33:58.510165 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:33:58.510177 | orchestrator |
2026-01-01 00:33:58.510189 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:33:58.510200 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:33:58.510213 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:33:58.510225 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:33:58.510237 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:33:58.510248 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:33:58.510259 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:33:58.510271 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:33:58.510282 | orchestrator |
2026-01-01 00:33:58.510293 | orchestrator |
2026-01-01 00:33:58.510303 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:33:58.510344 | orchestrator | Thursday 01 January 2026 00:33:58 +0000 (0:00:01.709) 0:00:18.464 ******
2026-01-01 00:33:58.510355 | orchestrator | ===============================================================================
2026-01-01 00:33:58.510365 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.16s
2026-01-01 00:33:58.510375 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.71s
2026-01-01 00:33:58.510404 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.53s
2026-01-01 00:33:58.510414 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.52s
2026-01-01 00:33:58.510424 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.27s
2026-01-01 00:33:58.859069 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-01 00:33:58.859189 | orchestrator | + osism apply network
2026-01-01 00:34:11.040323 | orchestrator | 2026-01-01 00:34:11 | INFO  | Task 694c7787-1cce-4641-853d-bfad1b3a673d (network) was prepared for execution.
2026-01-01 00:34:11.040514 | orchestrator | 2026-01-01 00:34:11 | INFO  | It takes a moment until task 694c7787-1cce-4641-853d-bfad1b3a673d (network) has been started and output is visible here.
2026-01-01 00:34:41.735529 | orchestrator |
2026-01-01 00:34:41.735674 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-01 00:34:41.735692 | orchestrator |
2026-01-01 00:34:41.735705 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-01 00:34:41.735717 | orchestrator | Thursday 01 January 2026 00:34:15 +0000 (0:00:00.298) 0:00:00.298 ******
2026-01-01 00:34:41.735728 | orchestrator | ok: [testbed-manager]
2026-01-01 00:34:41.735741 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:34:41.735752 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:34:41.735764 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:34:41.735776 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:34:41.735787 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:34:41.735798 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:34:41.735808 | orchestrator |
2026-01-01 00:34:41.735820 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-01 00:34:41.735831 | orchestrator | Thursday 01 January 2026 00:34:16 +0000 (0:00:00.762) 0:00:01.060 ******
2026-01-01 00:34:41.735844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:34:41.735858 | orchestrator |
2026-01-01 00:34:41.735869 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-01 00:34:41.735880 | orchestrator | Thursday 01 January 2026 00:34:17 +0000 (0:00:01.301) 0:00:02.362 ******
2026-01-01 00:34:41.735891 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:34:41.735902 | orchestrator | ok: [testbed-manager]
2026-01-01 00:34:41.735913 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:34:41.735924 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:34:41.735935 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:34:41.735945 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:34:41.735956 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:34:41.735967 | orchestrator |
2026-01-01 00:34:41.735978 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-01 00:34:41.735989 | orchestrator | Thursday 01 January 2026 00:34:19 +0000 (0:00:02.204) 0:00:04.567 ******
2026-01-01 00:34:41.736000 | orchestrator | ok: [testbed-manager]
2026-01-01 00:34:41.736011 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:34:41.736021 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:34:41.736032 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:34:41.736043 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:34:41.736054 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:34:41.736065 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:34:41.736075 | orchestrator |
2026-01-01 00:34:41.736086 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-01 00:34:41.736128 | orchestrator | Thursday 01 January 2026 00:34:21 +0000 (0:00:01.903) 0:00:06.470 ******
2026-01-01 00:34:41.736140 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-01 00:34:41.736152 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-01 00:34:41.736163 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-01 00:34:41.736174 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-01 00:34:41.736185 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-01 00:34:41.736196 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-01 00:34:41.736207 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-01 00:34:41.736218 | orchestrator |
2026-01-01 00:34:41.736229 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-01 00:34:41.736240 | orchestrator | Thursday 01 January 2026 00:34:22 +0000 (0:00:01.074) 0:00:07.545 ******
2026-01-01 00:34:41.736252 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-01 00:34:41.736264 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-01 00:34:41.736274 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-01 00:34:41.736285 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-01 00:34:41.736296 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-01 00:34:41.736307 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-01 00:34:41.736318 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-01 00:34:41.736328 | orchestrator |
2026-01-01 00:34:41.736339 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-01 00:34:41.736350 | orchestrator | Thursday 01 January 2026 00:34:26 +0000 (0:00:03.585) 0:00:11.130 ******
2026-01-01 00:34:41.736361 | orchestrator | changed: [testbed-manager]
2026-01-01 00:34:41.736372 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:34:41.736383 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:34:41.736394 | orchestrator | changed:
[testbed-node-2] 2026-01-01 00:34:41.736426 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:34:41.736437 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:34:41.736448 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:34:41.736459 | orchestrator | 2026-01-01 00:34:41.736487 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-01 00:34:41.736499 | orchestrator | Thursday 01 January 2026 00:34:28 +0000 (0:00:01.714) 0:00:12.845 ****** 2026-01-01 00:34:41.736510 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 00:34:41.736520 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 00:34:41.736531 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-01 00:34:41.736542 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-01 00:34:41.736552 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-01 00:34:41.736563 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-01 00:34:41.736574 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-01 00:34:41.736585 | orchestrator | 2026-01-01 00:34:41.736595 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-01 00:34:41.736606 | orchestrator | Thursday 01 January 2026 00:34:30 +0000 (0:00:01.825) 0:00:14.670 ****** 2026-01-01 00:34:41.736617 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:41.736628 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:41.736639 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:41.736649 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:41.736660 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:41.736671 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:41.736681 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:41.736692 | orchestrator | 2026-01-01 00:34:41.736703 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-01 00:34:41.736732 | 
orchestrator | Thursday 01 January 2026 00:34:31 +0000 (0:00:01.212) 0:00:15.883 ****** 2026-01-01 00:34:41.736744 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:34:41.736755 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:34:41.736766 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:34:41.736790 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:34:41.736808 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:34:41.736827 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:34:41.736846 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:34:41.736864 | orchestrator | 2026-01-01 00:34:41.736881 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-01 00:34:41.736900 | orchestrator | Thursday 01 January 2026 00:34:31 +0000 (0:00:00.684) 0:00:16.568 ****** 2026-01-01 00:34:41.736918 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:41.736937 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:41.736955 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:41.736973 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:41.736990 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:41.737001 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:41.737012 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:41.737022 | orchestrator | 2026-01-01 00:34:41.737033 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-01 00:34:41.737044 | orchestrator | Thursday 01 January 2026 00:34:34 +0000 (0:00:02.412) 0:00:18.980 ****** 2026-01-01 00:34:41.737055 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:34:41.737066 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:34:41.737076 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:34:41.737087 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:34:41.737097 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:34:41.737108 | 
orchestrator | skipping: [testbed-node-5] 2026-01-01 00:34:41.737120 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-01 00:34:41.737132 | orchestrator | 2026-01-01 00:34:41.737143 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-01 00:34:41.737154 | orchestrator | Thursday 01 January 2026 00:34:35 +0000 (0:00:00.969) 0:00:19.950 ****** 2026-01-01 00:34:41.737165 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:41.737176 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:34:41.737187 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:34:41.737197 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:34:41.737208 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:34:41.737218 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:34:41.737229 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:34:41.737239 | orchestrator | 2026-01-01 00:34:41.737250 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-01 00:34:41.737261 | orchestrator | Thursday 01 January 2026 00:34:37 +0000 (0:00:01.749) 0:00:21.700 ****** 2026-01-01 00:34:41.737272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:34:41.737286 | orchestrator | 2026-01-01 00:34:41.737296 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-01 00:34:41.737307 | orchestrator | Thursday 01 January 2026 00:34:38 +0000 (0:00:01.310) 0:00:23.010 ****** 2026-01-01 00:34:41.737318 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:41.737328 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:41.737339 | orchestrator 
| ok: [testbed-node-1] 2026-01-01 00:34:41.737350 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:41.737360 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:41.737371 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:41.737381 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:41.737392 | orchestrator | 2026-01-01 00:34:41.737423 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-01 00:34:41.737435 | orchestrator | Thursday 01 January 2026 00:34:39 +0000 (0:00:01.234) 0:00:24.245 ****** 2026-01-01 00:34:41.737445 | orchestrator | ok: [testbed-manager] 2026-01-01 00:34:41.737456 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:34:41.737467 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:34:41.737486 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:34:41.737497 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:34:41.737508 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:34:41.737518 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:34:41.737529 | orchestrator | 2026-01-01 00:34:41.737540 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-01 00:34:41.737551 | orchestrator | Thursday 01 January 2026 00:34:40 +0000 (0:00:00.694) 0:00:24.939 ****** 2026-01-01 00:34:41.737562 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-01 00:34:41.737573 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-01 00:34:41.737584 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-01 00:34:41.737595 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-01 00:34:41.737606 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-01 00:34:41.737617 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-01 00:34:41.737628 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-01 00:34:41.737639 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-01 00:34:41.737649 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-01 00:34:41.737660 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-01 00:34:41.737671 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-01 00:34:41.737682 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-01 00:34:41.737692 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-01 00:34:41.737703 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-01 00:34:41.737714 | orchestrator | 2026-01-01 00:34:41.737734 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-01 00:34:59.948793 | orchestrator | Thursday 01 January 2026 00:34:41 +0000 (0:00:01.379) 0:00:26.319 ****** 2026-01-01 00:34:59.948928 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:34:59.948944 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:34:59.948956 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:34:59.948968 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:34:59.948980 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:34:59.948990 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:34:59.949003 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:34:59.949014 | orchestrator | 2026-01-01 00:34:59.949027 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-01 00:34:59.949038 | orchestrator | Thursday 01 January 2026 00:34:42 +0000 (0:00:00.664) 0:00:26.983 ****** 2026-01-01 00:34:59.949075 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2026-01-01 00:34:59.949091 | orchestrator | 2026-01-01 00:34:59.949102 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-01 00:34:59.949114 | orchestrator | Thursday 01 January 2026 00:34:47 +0000 (0:00:05.020) 0:00:32.004 ****** 2026-01-01 00:34:59.949127 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949206 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 
23}}) 2026-01-01 00:34:59.949226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:34:59.949237 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949249 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:34:59.949289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:34:59.949323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:34:59.949338 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:34:59.949351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:34:59.949365 | orchestrator | 2026-01-01 00:34:59.949378 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-01 00:34:59.949392 | orchestrator | Thursday 01 January 2026 00:34:53 +0000 (0:00:06.387) 0:00:38.391 ****** 2026-01-01 00:34:59.949405 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949478 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949491 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949530 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:34:59.949543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-01 00:34:59.949555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:34:59.949574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 
'mtu': 1350, 'vni': 23}}) 2026-01-01 00:34:59.949588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:34:59.949600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:34:59.949627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:35:14.662498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-01 00:35:14.662657 | orchestrator | 2026-01-01 00:35:14.662686 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-01 00:35:14.662700 | orchestrator | Thursday 01 January 2026 00:34:59 +0000 (0:00:06.134) 0:00:44.526 ****** 2026-01-01 00:35:14.662744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:35:14.662757 | orchestrator | 2026-01-01 00:35:14.662768 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-01-01 00:35:14.662780 | orchestrator | Thursday 01 January 2026 00:35:01 +0000 (0:00:01.355) 0:00:45.881 ****** 2026-01-01 00:35:14.662791 | orchestrator | ok: [testbed-manager] 2026-01-01 00:35:14.662803 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:35:14.662814 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:35:14.662825 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:35:14.662836 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:35:14.662847 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:35:14.662858 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:35:14.662869 | orchestrator | 2026-01-01 00:35:14.662880 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-01 00:35:14.662892 | orchestrator | Thursday 01 January 2026 00:35:02 +0000 (0:00:01.216) 0:00:47.098 ****** 2026-01-01 00:35:14.662903 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-01 00:35:14.662915 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-01 00:35:14.662926 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-01 00:35:14.662937 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-01 00:35:14.662948 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-01 00:35:14.662961 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-01 00:35:14.662974 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-01 00:35:14.662986 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-01 00:35:14.662998 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:35:14.663012 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-01-01 00:35:14.663025 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-01 00:35:14.663038 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-01 00:35:14.663050 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-01 00:35:14.663063 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:35:14.663076 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-01 00:35:14.663088 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-01 00:35:14.663100 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-01 00:35:14.663112 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:35:14.663125 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-01 00:35:14.663138 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-01 00:35:14.663150 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-01 00:35:14.663182 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-01 00:35:14.663195 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-01 00:35:14.663207 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:35:14.663220 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-01 00:35:14.663232 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-01 00:35:14.663254 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-01 00:35:14.663265 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-01 00:35:14.663276 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:35:14.663287 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:35:14.663301 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-01 00:35:14.663320 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-01 00:35:14.663339 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-01 00:35:14.663357 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-01 00:35:14.663375 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:35:14.663392 | orchestrator | 2026-01-01 00:35:14.663409 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-01-01 00:35:14.663526 | orchestrator | Thursday 01 January 2026 00:35:03 +0000 (0:00:00.985) 0:00:48.083 ****** 2026-01-01 00:35:14.663552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:35:14.663568 | orchestrator | 2026-01-01 00:35:14.663579 | orchestrator | TASK [osism.commons.network : Install required packages for network-extra-init] *** 2026-01-01 00:35:14.663590 | orchestrator | Thursday 01 January 2026 00:35:04 +0000 (0:00:01.364) 0:00:49.448 ****** 2026-01-01 00:35:14.663601 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:35:14.663612 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:35:14.663623 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:35:14.663634 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:35:14.663644 | orchestrator | skipping: [testbed-node-3] 2026-01-01 
00:35:14.663655 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:35:14.663665 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:35:14.663676 | orchestrator | 2026-01-01 00:35:14.663687 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-01-01 00:35:14.663698 | orchestrator | Thursday 01 January 2026 00:35:05 +0000 (0:00:00.670) 0:00:50.118 ****** 2026-01-01 00:35:14.663709 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:35:14.663720 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:35:14.663730 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:35:14.663741 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:35:14.663751 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:35:14.663762 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:35:14.663772 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:35:14.663783 | orchestrator | 2026-01-01 00:35:14.663794 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-01-01 00:35:14.663805 | orchestrator | Thursday 01 January 2026 00:35:06 +0000 (0:00:00.853) 0:00:50.972 ****** 2026-01-01 00:35:14.663815 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:35:14.663826 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:35:14.663837 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:35:14.663847 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:35:14.663858 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:35:14.663868 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:35:14.663879 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:35:14.663890 | orchestrator | 2026-01-01 00:35:14.663901 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-01-01 00:35:14.663911 | orchestrator | Thursday 01 January 2026 00:35:07 +0000 (0:00:00.626) 0:00:51.599 ****** 2026-01-01 
00:35:14.663922 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:35:14.663933 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:35:14.663944 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:35:14.663955 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:35:14.663976 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:35:14.663987 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:35:14.663998 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:35:14.664009 | orchestrator | 
2026-01-01 00:35:14.664020 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-01-01 00:35:14.664031 | orchestrator | Thursday 01 January 2026 00:35:07 +0000 (0:00:00.844) 0:00:52.443 ******
2026-01-01 00:35:14.664042 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:35:14.664053 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:35:14.664063 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:35:14.664074 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:35:14.664085 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:14.664096 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:35:14.664106 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:35:14.664117 | orchestrator | 
2026-01-01 00:35:14.664128 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-01-01 00:35:14.664139 | orchestrator | Thursday 01 January 2026 00:35:09 +0000 (0:00:01.572) 0:00:54.015 ******
2026-01-01 00:35:14.664150 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:14.664160 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:35:14.664171 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:35:14.664182 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:35:14.664193 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:35:14.664203 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:35:14.664214 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:35:14.664225 | orchestrator | 
2026-01-01 00:35:14.664236 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-01-01 00:35:14.664247 | orchestrator | Thursday 01 January 2026 00:35:10 +0000 (0:00:01.337) 0:00:55.353 ******
2026-01-01 00:35:14.664257 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:14.664276 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:35:14.664287 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:35:14.664298 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:35:14.664308 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:35:14.664319 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:35:14.664330 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:35:14.664340 | orchestrator | 
2026-01-01 00:35:14.664351 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-01 00:35:14.664362 | orchestrator | Thursday 01 January 2026 00:35:13 +0000 (0:00:02.376) 0:00:57.730 ******
2026-01-01 00:35:14.664373 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:35:14.664385 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:35:14.664395 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:35:14.664406 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:35:14.664417 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:35:14.664428 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:35:14.664488 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:35:14.664507 | orchestrator | 
2026-01-01 00:35:14.664526 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-01 00:35:14.664539 | orchestrator | Thursday 01 January 2026 00:35:13 +0000 (0:00:00.725) 0:00:58.456 ******
2026-01-01 00:35:14.664550 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:35:14.664561 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:35:14.664571 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:35:14.664588 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:35:14.664608 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:35:14.664627 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:35:14.664645 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:35:14.664659 | orchestrator | 
2026-01-01 00:35:14.664671 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:35:15.157821 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-01 00:35:15.157944 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-01 00:35:15.157992 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-01 00:35:15.158012 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-01 00:35:15.158105 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-01 00:35:15.158120 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-01 00:35:15.158136 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-01 00:35:15.158151 | orchestrator | 
2026-01-01 00:35:15.158169 | orchestrator | 
2026-01-01 00:35:15.158186 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:35:15.158203 | orchestrator | Thursday 01 January 2026 00:35:14 +0000 (0:00:00.776) 0:00:59.232 ******
2026-01-01 00:35:15.158212 | orchestrator | ===============================================================================
2026-01-01 00:35:15.158221 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.39s
2026-01-01 00:35:15.158230 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.13s
2026-01-01 00:35:15.158239 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.02s
2026-01-01 00:35:15.158248 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.59s
2026-01-01 00:35:15.158257 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.41s
2026-01-01 00:35:15.158265 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.38s
2026-01-01 00:35:15.158274 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.20s
2026-01-01 00:35:15.158282 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.90s
2026-01-01 00:35:15.158291 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.83s
2026-01-01 00:35:15.158300 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.75s
2026-01-01 00:35:15.158308 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.71s
2026-01-01 00:35:15.158317 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.57s
2026-01-01 00:35:15.158325 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.38s
2026-01-01 00:35:15.158334 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.36s
2026-01-01 00:35:15.158343 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.36s
2026-01-01 00:35:15.158351 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.34s
2026-01-01 00:35:15.158360 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s
2026-01-01 00:35:15.158369 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.30s
2026-01-01 00:35:15.158377 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.23s
2026-01-01 00:35:15.158386 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.22s
2026-01-01 00:35:15.593795 | orchestrator | + osism apply wireguard
2026-01-01 00:35:27.840076 | orchestrator | 2026-01-01 00:35:27 | INFO  | Task 2f3f29ef-1987-42d1-a3f2-28684138ba3c (wireguard) was prepared for execution.
2026-01-01 00:35:27.840221 | orchestrator | 2026-01-01 00:35:27 | INFO  | It takes a moment until task 2f3f29ef-1987-42d1-a3f2-28684138ba3c (wireguard) has been started and output is visible here.
2026-01-01 00:35:50.210449 | orchestrator | 
2026-01-01 00:35:50.210654 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-01-01 00:35:50.210672 | orchestrator | 
2026-01-01 00:35:50.210682 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-01-01 00:35:50.210692 | orchestrator | Thursday 01 January 2026 00:35:32 +0000 (0:00:00.365) 0:00:00.365 ******
2026-01-01 00:35:50.210702 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:50.210713 | orchestrator | 
2026-01-01 00:35:50.210723 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-01-01 00:35:50.210733 | orchestrator | Thursday 01 January 2026 00:35:34 +0000 (0:00:02.006) 0:00:02.371 ******
2026-01-01 00:35:50.210743 | orchestrator | changed: [testbed-manager]
2026-01-01 00:35:50.210753 | orchestrator | 
2026-01-01 00:35:50.210763 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-01-01 00:35:50.210773 | orchestrator | Thursday 01 January 2026 00:35:41 +0000 (0:00:07.459) 0:00:09.830 ******
2026-01-01 00:35:50.210782 | orchestrator | changed: [testbed-manager]
2026-01-01 00:35:50.210792 | orchestrator | 
2026-01-01 00:35:50.210801 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-01-01 00:35:50.210811 | orchestrator | Thursday 01 January 2026 00:35:42 +0000 (0:00:00.606) 0:00:10.436 ******
2026-01-01 00:35:50.210820 | orchestrator | changed: [testbed-manager]
2026-01-01 00:35:50.210829 | orchestrator | 
2026-01-01 00:35:50.210839 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-01-01 00:35:50.210849 | orchestrator | Thursday 01 January 2026 00:35:43 +0000 (0:00:00.445) 0:00:10.881 ******
2026-01-01 00:35:50.210858 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:50.210868 | orchestrator | 
2026-01-01 00:35:50.210877 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-01-01 00:35:50.210887 | orchestrator | Thursday 01 January 2026 00:35:43 +0000 (0:00:00.735) 0:00:11.617 ******
2026-01-01 00:35:50.210896 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:50.210905 | orchestrator | 
2026-01-01 00:35:50.210915 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-01-01 00:35:50.210924 | orchestrator | Thursday 01 January 2026 00:35:44 +0000 (0:00:00.445) 0:00:12.062 ******
2026-01-01 00:35:50.210934 | orchestrator | ok: [testbed-manager]
2026-01-01 00:35:50.210944 | orchestrator | 
2026-01-01 00:35:50.210954 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-01-01 00:35:50.210964 | orchestrator | Thursday 01 January 2026 00:35:44 +0000 (0:00:00.449) 0:00:12.511 ******
2026-01-01 00:35:50.210974 | orchestrator | changed: [testbed-manager]
2026-01-01 00:35:50.210985 | orchestrator | 
2026-01-01 00:35:50.210996 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-01-01 00:35:50.211007 | orchestrator | Thursday 01 January 2026 00:35:45 +0000 (0:00:01.305) 0:00:13.817 ******
2026-01-01 00:35:50.211019 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-01 00:35:50.211030 | orchestrator | changed: [testbed-manager]
2026-01-01 00:35:50.211042 | orchestrator | 
2026-01-01 00:35:50.211054 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-01-01 00:35:50.211064 | orchestrator | Thursday 01 January 2026 00:35:46 +0000 (0:00:01.038) 0:00:14.856 ******
2026-01-01 00:35:50.211075 | orchestrator | changed: [testbed-manager]
2026-01-01 00:35:50.211087 | orchestrator | 
2026-01-01 00:35:50.211098 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-01-01 00:35:50.211109 | orchestrator | Thursday 01 January 2026 00:35:48 +0000 (0:00:01.780) 0:00:16.636 ******
2026-01-01 00:35:50.211120 | orchestrator | changed: [testbed-manager]
2026-01-01 00:35:50.211131 | orchestrator | 
2026-01-01 00:35:50.211142 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:35:50.211153 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:35:50.211165 | orchestrator | 
2026-01-01 00:35:50.211176 | orchestrator | 
2026-01-01 00:35:50.211187 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:35:50.211206 | orchestrator | Thursday 01 January 2026 00:35:49 +0000 (0:00:01.035) 0:00:17.671 ******
2026-01-01 00:35:50.211217 | orchestrator | ===============================================================================
2026-01-01 00:35:50.211228 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.46s
2026-01-01 00:35:50.211239 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 2.01s
2026-01-01 00:35:50.211250 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.78s
2026-01-01 00:35:50.211261 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.31s
2026-01-01 00:35:50.211273 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.04s
2026-01-01 00:35:50.211284 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.04s
2026-01-01 00:35:50.211294 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.74s
2026-01-01 00:35:50.211305 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.61s
2026-01-01 00:35:50.211317 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s
2026-01-01 00:35:50.211328 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.45s
2026-01-01 00:35:50.211340 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s
2026-01-01 00:35:50.576799 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-01-01 00:35:50.611765 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-01-01 00:35:50.611869 | orchestrator | Dload Upload Total Spent Left Speed
2026-01-01 00:35:50.687932 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 195 0 --:--:-- --:--:-- --:--:-- 197
2026-01-01 00:35:50.700922 | orchestrator | + osism apply --environment custom workarounds
2026-01-01 00:35:52.751330 | orchestrator | 2026-01-01 00:35:52 | INFO  | Trying to run play workarounds in environment custom
2026-01-01 00:36:02.937849 | orchestrator | 2026-01-01 00:36:02 | INFO  | Task 16ecc4ec-b4fd-4d28-bd21-dddd8109c615 (workarounds) was prepared for execution.
2026-01-01 00:36:02.937985 | orchestrator | 2026-01-01 00:36:02 | INFO  | It takes a moment until task 16ecc4ec-b4fd-4d28-bd21-dddd8109c615 (workarounds) has been started and output is visible here.
2026-01-01 00:36:29.298898 | orchestrator | 
2026-01-01 00:36:29.299037 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 00:36:29.299065 | orchestrator | 
2026-01-01 00:36:29.299079 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-01 00:36:29.299096 | orchestrator | Thursday 01 January 2026 00:36:07 +0000 (0:00:00.131) 0:00:00.131 ******
2026-01-01 00:36:29.299116 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-01 00:36:29.299136 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-01 00:36:29.299155 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-01 00:36:29.299176 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-01 00:36:29.299195 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-01 00:36:29.299215 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-01 00:36:29.299234 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-01 00:36:29.299246 | orchestrator | 
2026-01-01 00:36:29.299258 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-01 00:36:29.299269 | orchestrator | 
2026-01-01 00:36:29.299280 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-01 00:36:29.299291 | orchestrator | Thursday 01 January 2026 00:36:08 +0000 (0:00:00.838) 0:00:00.970 ******
2026-01-01 00:36:29.299303 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:29.299345 | orchestrator | 
2026-01-01 00:36:29.299357 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-01 00:36:29.299368 | orchestrator | 
2026-01-01 00:36:29.299379 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-01 00:36:29.299390 | orchestrator | Thursday 01 January 2026 00:36:10 +0000 (0:00:02.588) 0:00:03.558 ******
2026-01-01 00:36:29.299401 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:29.299412 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:29.299423 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:29.299436 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:29.299448 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:29.299460 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:29.299472 | orchestrator | 
2026-01-01 00:36:29.299485 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-01 00:36:29.299498 | orchestrator | 
2026-01-01 00:36:29.299542 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-01 00:36:29.299562 | orchestrator | Thursday 01 January 2026 00:36:12 +0000 (0:00:01.828) 0:00:05.386 ******
2026-01-01 00:36:29.299577 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-01 00:36:29.299591 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-01 00:36:29.299604 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-01 00:36:29.299617 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-01 00:36:29.299627 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-01 00:36:29.299638 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-01 00:36:29.299650 | orchestrator | 
2026-01-01 00:36:29.299661 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-01 00:36:29.299672 | orchestrator | Thursday 01 January 2026 00:36:14 +0000 (0:00:01.567) 0:00:06.954 ******
2026-01-01 00:36:29.299683 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:29.299694 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:29.299705 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:29.299716 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:29.299727 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:29.299737 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:29.299748 | orchestrator | 
2026-01-01 00:36:29.299759 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-01 00:36:29.299770 | orchestrator | Thursday 01 January 2026 00:36:18 +0000 (0:00:03.910) 0:00:10.864 ******
2026-01-01 00:36:29.299781 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:29.299792 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:36:29.299803 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:36:29.299813 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:36:29.299824 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:36:29.299835 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:36:29.299846 | orchestrator | 
2026-01-01 00:36:29.299857 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-01 00:36:29.299867 | orchestrator | 
2026-01-01 00:36:29.299879 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-01 00:36:29.299889 | orchestrator | Thursday 01 January 2026 00:36:18 +0000 (0:00:00.772) 0:00:11.637 ******
2026-01-01 00:36:29.299900 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:29.299911 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:29.299922 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:29.299933 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:29.299944 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:29.299954 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:29.299975 | orchestrator | changed: [testbed-manager]
2026-01-01 00:36:29.299986 | orchestrator | 
2026-01-01 00:36:29.300016 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-01 00:36:29.300028 | orchestrator | Thursday 01 January 2026 00:36:20 +0000 (0:00:01.624) 0:00:13.261 ******
2026-01-01 00:36:29.300039 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:29.300050 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:29.300061 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:29.300072 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:29.300083 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:29.300093 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:29.300125 | orchestrator | changed: [testbed-manager]
2026-01-01 00:36:29.300136 | orchestrator | 
2026-01-01 00:36:29.300147 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-01 00:36:29.300158 | orchestrator | Thursday 01 January 2026 00:36:22 +0000 (0:00:01.646) 0:00:14.902 ******
2026-01-01 00:36:29.300169 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:29.300180 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:29.300191 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:29.300202 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:29.300213 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:29.300224 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:29.300235 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:29.300246 | orchestrator | 
2026-01-01 00:36:29.300257 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-01 00:36:29.300268 | orchestrator | Thursday 01 January 2026 00:36:23 +0000 (0:00:01.646) 0:00:16.548 ******
2026-01-01 00:36:29.300279 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:29.300290 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:29.300301 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:29.300312 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:29.300323 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:29.300333 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:29.300344 | orchestrator | changed: [testbed-manager]
2026-01-01 00:36:29.300355 | orchestrator | 
2026-01-01 00:36:29.300366 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-01 00:36:29.300377 | orchestrator | Thursday 01 January 2026 00:36:25 +0000 (0:00:01.937) 0:00:18.486 ******
2026-01-01 00:36:29.300388 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:29.300399 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:36:29.300410 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:36:29.300420 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:36:29.300431 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:36:29.300442 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:36:29.300453 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:36:29.300464 | orchestrator | 
2026-01-01 00:36:29.300475 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-01 00:36:29.300486 | orchestrator | 
2026-01-01 00:36:29.300497 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-01 00:36:29.300530 | orchestrator | Thursday 01 January 2026 00:36:26 +0000 (0:00:00.645) 0:00:19.131 ******
2026-01-01 00:36:29.300543 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:36:29.300555 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:36:29.300565 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:36:29.300576 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:36:29.300587 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:36:29.300598 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:36:29.300608 | orchestrator | ok: [testbed-manager]
2026-01-01 00:36:29.300619 | orchestrator | 
2026-01-01 00:36:29.300630 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:36:29.300643 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 00:36:29.300656 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:29.300674 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:29.300685 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:29.300697 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:29.300707 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:29.300718 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:29.300729 | orchestrator | 
2026-01-01 00:36:29.300740 | orchestrator | 
2026-01-01 00:36:29.300751 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:36:29.300762 | orchestrator | Thursday 01 January 2026 00:36:29 +0000 (0:00:02.983) 0:00:22.115 ******
2026-01-01 00:36:29.300773 | orchestrator | ===============================================================================
2026-01-01 00:36:29.300784 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.91s
2026-01-01 00:36:29.300800 | orchestrator | Install python3-docker -------------------------------------------------- 2.98s
2026-01-01 00:36:29.300811 | orchestrator | Apply netplan configuration --------------------------------------------- 2.59s
2026-01-01 00:36:29.300822 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.94s
2026-01-01 00:36:29.300833 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s
2026-01-01 00:36:29.300844 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.65s
2026-01-01 00:36:29.300855 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.64s
2026-01-01 00:36:29.300866 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.62s
2026-01-01 00:36:29.300877 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.57s
2026-01-01 00:36:29.300888 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.84s
2026-01-01 00:36:29.300898 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s
2026-01-01 00:36:29.300926 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s
2026-01-01 00:36:30.077682 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-01 00:36:42.493961 | orchestrator | 2026-01-01 00:36:42 | INFO  | Task 803de236-b109-4e26-85f5-7f4072ff81d7 (reboot) was prepared for execution.
2026-01-01 00:36:42.494122 | orchestrator | 2026-01-01 00:36:42 | INFO  | It takes a moment until task 803de236-b109-4e26-85f5-7f4072ff81d7 (reboot) has been started and output is visible here.
2026-01-01 00:36:53.167982 | orchestrator | 
2026-01-01 00:36:53.168122 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-01 00:36:53.168139 | orchestrator | 
2026-01-01 00:36:53.168152 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-01 00:36:53.168164 | orchestrator | Thursday 01 January 2026 00:36:46 +0000 (0:00:00.204) 0:00:00.204 ******
2026-01-01 00:36:53.168176 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:36:53.168189 | orchestrator | 
2026-01-01 00:36:53.168201 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-01 00:36:53.168212 | orchestrator | Thursday 01 January 2026 00:36:46 +0000 (0:00:00.114) 0:00:00.318 ******
2026-01-01 00:36:53.168224 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:36:53.168236 | orchestrator | 
2026-01-01 00:36:53.168248 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-01 00:36:53.168290 | orchestrator | Thursday 01 January 2026 00:36:47 +0000 (0:00:00.988) 0:00:01.306 ******
2026-01-01 00:36:53.168302 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:36:53.168314 | orchestrator | 
2026-01-01 00:36:53.168325 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-01 00:36:53.168336 | orchestrator | 
2026-01-01 00:36:53.168348 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-01 00:36:53.168360 | orchestrator | Thursday 01 January 2026 00:36:48 +0000 (0:00:00.102) 0:00:01.409 ******
2026-01-01 00:36:53.168371 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:36:53.168382 | orchestrator | 
2026-01-01 00:36:53.168394 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-01 00:36:53.168405 | orchestrator | Thursday 01 January 2026 00:36:48 +0000 (0:00:00.116) 0:00:01.526 ******
2026-01-01 00:36:53.168417 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:36:53.168428 | orchestrator | 
2026-01-01 00:36:53.168440 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-01 00:36:53.168452 | orchestrator | Thursday 01 January 2026 00:36:48 +0000 (0:00:00.691) 0:00:02.217 ******
2026-01-01 00:36:53.168463 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:36:53.168474 | orchestrator | 
2026-01-01 00:36:53.168486 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-01 00:36:53.168497 | orchestrator | 
2026-01-01 00:36:53.168511 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-01 00:36:53.168525 | orchestrator | Thursday 01 January 2026 00:36:48 +0000 (0:00:00.112) 0:00:02.330 ******
2026-01-01 00:36:53.168563 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:36:53.168577 | orchestrator | 
2026-01-01 00:36:53.168590 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-01 00:36:53.168603 | orchestrator | Thursday 01 January 2026 00:36:49 +0000 (0:00:00.216) 0:00:02.546 ******
2026-01-01 00:36:53.168616 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:36:53.168628 | orchestrator | 
2026-01-01 00:36:53.168642 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-01 00:36:53.168655 | orchestrator | Thursday 01 January 2026 00:36:49 +0000 (0:00:00.700) 0:00:03.247 ******
2026-01-01 00:36:53.168668 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:36:53.168681 | orchestrator | 
2026-01-01 00:36:53.168694 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-01 00:36:53.168707 | orchestrator | 
2026-01-01 00:36:53.168720 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-01 00:36:53.168733 | orchestrator | Thursday 01 January 2026 00:36:50 +0000 (0:00:00.124) 0:00:03.372 ******
2026-01-01 00:36:53.168746 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:53.168759 | orchestrator | 
2026-01-01 00:36:53.168772 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-01 00:36:53.168785 | orchestrator | Thursday 01 January 2026 00:36:50 +0000 (0:00:00.097) 0:00:03.469 ******
2026-01-01 00:36:53.168797 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:36:53.168811 | orchestrator | 
2026-01-01 00:36:53.168824 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-01 00:36:53.168837 | orchestrator | Thursday 01 January 2026 00:36:50 +0000 (0:00:00.678) 0:00:04.147 ******
2026-01-01 00:36:53.168850 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:36:53.168863 | orchestrator | 
2026-01-01 00:36:53.168875 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-01 00:36:53.168886 | orchestrator | 
2026-01-01 00:36:53.168917 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-01 00:36:53.168929 | orchestrator | Thursday 01 January 2026 00:36:50 +0000 (0:00:00.136) 0:00:04.284 ******
2026-01-01 00:36:53.168941 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:36:53.168952 | orchestrator | 
2026-01-01 00:36:53.168964 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-01 00:36:53.168985 | orchestrator | Thursday 01 January 2026 00:36:51 +0000 (0:00:00.104) 0:00:04.388 ******
2026-01-01 00:36:53.168997 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:36:53.169008 | orchestrator | 
2026-01-01 00:36:53.169020 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-01 00:36:53.169031 | orchestrator | Thursday 01 January 2026 00:36:51 +0000 (0:00:00.716) 0:00:05.105 ******
2026-01-01 00:36:53.169043 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:36:53.169054 | orchestrator | 
2026-01-01 00:36:53.169068 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-01 00:36:53.169086 | orchestrator | 
2026-01-01 00:36:53.169105 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-01 00:36:53.169133 | orchestrator | Thursday 01 January 2026 00:36:51 +0000 (0:00:00.120) 0:00:05.225 ******
2026-01-01 00:36:53.169153 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:36:53.169171 | orchestrator | 
2026-01-01 00:36:53.169188 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-01 00:36:53.169205 | orchestrator | Thursday 01 January 2026 00:36:51 +0000 (0:00:00.101) 0:00:05.327 ******
2026-01-01 00:36:53.169222 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:36:53.169239 | orchestrator | 
2026-01-01 00:36:53.169258 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-01 00:36:53.169277 | orchestrator | Thursday 01 January 2026 00:36:52 +0000 (0:00:00.729) 0:00:06.057 ******
2026-01-01 00:36:53.169319 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:36:53.169332 | orchestrator | 
2026-01-01 00:36:53.169344 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:36:53.169356 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:53.169368 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:53.169380 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:53.169390 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:53.169401 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:53.169412 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 00:36:53.169422 | orchestrator | 
2026-01-01 00:36:53.169433 | orchestrator | 
2026-01-01 00:36:53.169444 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:36:53.169455 | orchestrator | Thursday 01 January 2026 00:36:52 +0000 (0:00:00.043) 0:00:06.100 ******
2026-01-01 00:36:53.169466 | orchestrator | ===============================================================================
2026-01-01 00:36:53.169477 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.50s
2026-01-01 00:36:53.169488 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s
2026-01-01 00:36:53.169499 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s
2026-01-01 00:36:53.528619 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-01-01 00:37:05.786200 | orchestrator | 2026-01-01 00:37:05 | INFO  | Task 32d3e102-2cd4-4369-bfb7-37dac7fc30b4 (wait-for-connection) was prepared for execution.
2026-01-01 00:37:05.786341 | orchestrator | 2026-01-01 00:37:05 | INFO  | It takes a moment until task 32d3e102-2cd4-4369-bfb7-37dac7fc30b4 (wait-for-connection) has been started and output is visible here.
2026-01-01 00:37:22.251205 | orchestrator | 
2026-01-01 00:37:22.251337 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-01 00:37:22.251359 | orchestrator | 
2026-01-01 00:37:22.251373 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-01 00:37:22.251383 | orchestrator | Thursday 01 January 2026 00:37:10 +0000 (0:00:00.246) 0:00:00.246 ******
2026-01-01 00:37:22.251391 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:37:22.251402 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:37:22.251410 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:37:22.251419 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:37:22.251427 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:37:22.251435 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:37:22.251443 | orchestrator | 
2026-01-01 00:37:22.251452 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:37:22.251460 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:37:22.251470 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:37:22.251497 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:37:22.251505 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:37:22.251514 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:37:22.251525 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:37:22.251539 | orchestrator | 
2026-01-01 00:37:22.251606 | orchestrator | 
2026-01-01 00:37:22.251622 | orchestrator | TASKS RECAP
******************************************************************** 2026-01-01 00:37:22.251636 | orchestrator | Thursday 01 January 2026 00:37:21 +0000 (0:00:11.608) 0:00:11.854 ****** 2026-01-01 00:37:22.251651 | orchestrator | =============================================================================== 2026-01-01 00:37:22.251664 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.61s 2026-01-01 00:37:22.598801 | orchestrator | + osism apply hddtemp 2026-01-01 00:37:34.753696 | orchestrator | 2026-01-01 00:37:34 | INFO  | Task 02325d14-b544-461a-b123-7fd4fa8996c0 (hddtemp) was prepared for execution. 2026-01-01 00:37:34.753820 | orchestrator | 2026-01-01 00:37:34 | INFO  | It takes a moment until task 02325d14-b544-461a-b123-7fd4fa8996c0 (hddtemp) has been started and output is visible here. 2026-01-01 00:38:05.849241 | orchestrator | 2026-01-01 00:38:05.849377 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-01-01 00:38:05.849404 | orchestrator | 2026-01-01 00:38:05.849423 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-01-01 00:38:05.849443 | orchestrator | Thursday 01 January 2026 00:37:39 +0000 (0:00:00.264) 0:00:00.264 ****** 2026-01-01 00:38:05.849462 | orchestrator | ok: [testbed-manager] 2026-01-01 00:38:05.849484 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:38:05.849502 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:38:05.849521 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:38:05.849532 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:38:05.849544 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:38:05.849556 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:38:05.849567 | orchestrator | 2026-01-01 00:38:05.849578 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-01-01 00:38:05.849648 | orchestrator | Thursday 01 January 2026 
00:37:39 +0000 (0:00:00.717) 0:00:00.982 ****** 2026-01-01 00:38:05.849662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:38:05.849700 | orchestrator | 2026-01-01 00:38:05.849712 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-01-01 00:38:05.849723 | orchestrator | Thursday 01 January 2026 00:37:41 +0000 (0:00:01.294) 0:00:02.276 ****** 2026-01-01 00:38:05.849734 | orchestrator | ok: [testbed-manager] 2026-01-01 00:38:05.849745 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:38:05.849756 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:38:05.849770 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:38:05.849782 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:38:05.849795 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:38:05.849808 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:38:05.849821 | orchestrator | 2026-01-01 00:38:05.849834 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-01-01 00:38:05.849847 | orchestrator | Thursday 01 January 2026 00:37:43 +0000 (0:00:02.384) 0:00:04.660 ****** 2026-01-01 00:38:05.849862 | orchestrator | changed: [testbed-manager] 2026-01-01 00:38:05.849876 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:38:05.849889 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:38:05.849900 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:38:05.849911 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:38:05.849921 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:38:05.849932 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:38:05.849943 | orchestrator | 2026-01-01 00:38:05.849954 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-01-01 00:38:05.849966 | orchestrator | Thursday 01 January 2026 00:37:44 +0000 (0:00:01.337) 0:00:05.998 ****** 2026-01-01 00:38:05.849977 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:38:05.849988 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:38:05.849999 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:38:05.850009 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:38:05.850089 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:38:05.850100 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:38:05.850111 | orchestrator | ok: [testbed-manager] 2026-01-01 00:38:05.850122 | orchestrator | 2026-01-01 00:38:05.850133 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-01-01 00:38:05.850144 | orchestrator | Thursday 01 January 2026 00:37:47 +0000 (0:00:02.296) 0:00:08.295 ****** 2026-01-01 00:38:05.850155 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:38:05.850166 | orchestrator | changed: [testbed-manager] 2026-01-01 00:38:05.850177 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:38:05.850188 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:38:05.850199 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:38:05.850210 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:38:05.850221 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:38:05.850232 | orchestrator | 2026-01-01 00:38:05.850243 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-01-01 00:38:05.850254 | orchestrator | Thursday 01 January 2026 00:37:48 +0000 (0:00:00.957) 0:00:09.252 ****** 2026-01-01 00:38:05.850265 | orchestrator | changed: [testbed-manager] 2026-01-01 00:38:05.850276 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:38:05.850287 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:38:05.850298 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:38:05.850309 | orchestrator | changed: 
[testbed-node-2] 2026-01-01 00:38:05.850320 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:38:05.850331 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:38:05.850342 | orchestrator | 2026-01-01 00:38:05.850373 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-01-01 00:38:05.850394 | orchestrator | Thursday 01 January 2026 00:38:02 +0000 (0:00:13.811) 0:00:23.064 ****** 2026-01-01 00:38:05.850424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:38:05.850462 | orchestrator | 2026-01-01 00:38:05.850482 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-01-01 00:38:05.850501 | orchestrator | Thursday 01 January 2026 00:38:03 +0000 (0:00:01.280) 0:00:24.344 ****** 2026-01-01 00:38:05.850519 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:38:05.850537 | orchestrator | changed: [testbed-manager] 2026-01-01 00:38:05.850557 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:38:05.850577 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:38:05.850637 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:38:05.850656 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:38:05.850674 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:38:05.850692 | orchestrator | 2026-01-01 00:38:05.850709 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:38:05.850727 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:38:05.850774 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:38:05.850795 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:38:05.850815 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:38:05.850834 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:38:05.850853 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:38:05.850872 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:38:05.850891 | orchestrator | 2026-01-01 00:38:05.850910 | orchestrator | 2026-01-01 00:38:05.850929 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:38:05.850947 | orchestrator | Thursday 01 January 2026 00:38:05 +0000 (0:00:02.032) 0:00:26.377 ****** 2026-01-01 00:38:05.850966 | orchestrator | =============================================================================== 2026-01-01 00:38:05.850986 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.81s 2026-01-01 00:38:05.851004 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.38s 2026-01-01 00:38:05.851021 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.30s 2026-01-01 00:38:05.851032 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.03s 2026-01-01 00:38:05.851043 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.34s 2026-01-01 00:38:05.851054 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.29s 2026-01-01 00:38:05.851065 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.28s 2026-01-01 00:38:05.851076 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.96s 2026-01-01 00:38:05.851087 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.72s 2026-01-01 00:38:06.208878 | orchestrator | ++ semver latest 7.1.1 2026-01-01 00:38:06.268855 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-01 00:38:06.268989 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-01 00:38:06.269006 | orchestrator | + sudo systemctl restart manager.service 2026-01-01 00:38:23.037494 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-01 00:38:23.037631 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-01 00:38:23.037648 | orchestrator | + local max_attempts=60 2026-01-01 00:38:23.037660 | orchestrator | + local name=ceph-ansible 2026-01-01 00:38:23.037699 | orchestrator | + local attempt_num=1 2026-01-01 00:38:23.037711 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:38:23.081631 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:38:23.081709 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:38:23.081725 | orchestrator | + sleep 5 2026-01-01 00:38:28.086714 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:38:28.124286 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:38:28.124356 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:38:28.124370 | orchestrator | + sleep 5 2026-01-01 00:38:33.128591 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:38:33.161046 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:38:33.161127 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:38:33.161141 | orchestrator | + sleep 5 2026-01-01 00:38:38.165031 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:38:38.203020 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:38:38.203115 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:38:38.203130 | orchestrator | + sleep 5 2026-01-01 00:38:43.209049 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:38:43.253564 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:38:43.253715 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:38:43.253730 | orchestrator | + sleep 5 2026-01-01 00:38:48.257794 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:38:48.288510 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:38:48.288596 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:38:48.288670 | orchestrator | + sleep 5 2026-01-01 00:38:53.292954 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:38:53.339410 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:38:53.339514 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:38:53.339528 | orchestrator | + sleep 5 2026-01-01 00:38:58.346282 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:38:58.380992 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-01 00:38:58.381112 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:38:58.381127 | orchestrator | + sleep 5 2026-01-01 00:39:03.383611 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:39:03.413541 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-01 00:39:03.413684 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:39:03.413703 | orchestrator | + sleep 5 2026-01-01 00:39:08.417152 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:39:08.454601 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2026-01-01 00:39:08.454785 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:39:08.454811 | orchestrator | + sleep 5 2026-01-01 00:39:13.459826 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:39:13.504058 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-01 00:39:13.504186 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:39:13.504214 | orchestrator | + sleep 5 2026-01-01 00:39:18.509056 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:39:18.551778 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-01 00:39:18.551866 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:39:18.551881 | orchestrator | + sleep 5 2026-01-01 00:39:23.555552 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:39:23.601293 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-01 00:39:23.601429 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-01 00:39:23.601444 | orchestrator | + sleep 5 2026-01-01 00:39:28.607415 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-01 00:39:28.649753 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:39:28.649859 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-01 00:39:28.649884 | orchestrator | + local max_attempts=60 2026-01-01 00:39:28.649904 | orchestrator | + local name=kolla-ansible 2026-01-01 00:39:28.649922 | orchestrator | + local attempt_num=1 2026-01-01 00:39:28.651052 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-01 00:39:28.682478 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:39:28.682624 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-01 00:39:28.682676 | orchestrator | + local max_attempts=60 2026-01-01 
00:39:28.682689 | orchestrator | + local name=osism-ansible 2026-01-01 00:39:28.682700 | orchestrator | + local attempt_num=1 2026-01-01 00:39:28.683018 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-01 00:39:28.715147 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-01 00:39:28.715236 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-01 00:39:28.715250 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-01 00:39:28.908927 | orchestrator | ARA in ceph-ansible already disabled. 2026-01-01 00:39:29.089253 | orchestrator | ARA in kolla-ansible already disabled. 2026-01-01 00:39:29.251293 | orchestrator | ARA in osism-ansible already disabled. 2026-01-01 00:39:29.424684 | orchestrator | ARA in osism-kubernetes already disabled. 2026-01-01 00:39:29.424951 | orchestrator | + osism apply gather-facts 2026-01-01 00:39:41.663511 | orchestrator | 2026-01-01 00:39:41 | INFO  | Task 3a45732f-1ad7-4b96-9fce-6e2173a87dd9 (gather-facts) was prepared for execution. 2026-01-01 00:39:41.663697 | orchestrator | 2026-01-01 00:39:41 | INFO  | It takes a moment until task 3a45732f-1ad7-4b96-9fce-6e2173a87dd9 (gather-facts) has been started and output is visible here. 
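The `wait_for_container_healthy` calls traced above follow a simple polling pattern: read the container's `.State.Health.Status` via `docker inspect`, retry up to `max_attempts` times with a 5 second pause, and give up otherwise. A minimal reconstruction of that loop is sketched below; the `status_cmd` and `interval` parameters are my additions (for testability), not part of the script in the trace, which calls `docker inspect` directly and sleeps a fixed 5 seconds.

```shell
# Sketch reconstructed from the trace above, NOT the original script.
# status_cmd is a hypothetical parameter standing in for:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local status_cmd=$3        # command printing the health status for $name
    local interval=${4:-5}     # seconds between polls (trace uses 5)
    local attempt_num=1
    # Poll until the container reports "healthy" or attempts run out.
    until [[ "$($status_cmd "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep "$interval"
    done
}
```

As in the trace, statuses like `unhealthy` or `starting` simply trigger another poll; only the exact string `healthy` ends the loop.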
2026-01-01 00:39:55.656146 | orchestrator | 2026-01-01 00:39:55.656244 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-01 00:39:55.656256 | orchestrator | 2026-01-01 00:39:55.656263 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-01 00:39:55.656271 | orchestrator | Thursday 01 January 2026 00:39:46 +0000 (0:00:00.225) 0:00:00.225 ****** 2026-01-01 00:39:55.656278 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:39:55.656286 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:39:55.656296 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:39:55.656307 | orchestrator | ok: [testbed-manager] 2026-01-01 00:39:55.656317 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:39:55.656328 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:39:55.656338 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:39:55.656348 | orchestrator | 2026-01-01 00:39:55.656357 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-01 00:39:55.656364 | orchestrator | 2026-01-01 00:39:55.656370 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-01 00:39:55.656377 | orchestrator | Thursday 01 January 2026 00:39:54 +0000 (0:00:08.539) 0:00:08.765 ****** 2026-01-01 00:39:55.656383 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:39:55.656391 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:39:55.656402 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:39:55.656413 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:39:55.656423 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:39:55.656434 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:39:55.656441 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:39:55.656448 | orchestrator | 2026-01-01 00:39:55.656454 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-01 00:39:55.656461 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:39:55.656469 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:39:55.656476 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:39:55.656482 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:39:55.656489 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:39:55.656514 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:39:55.656549 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 00:39:55.656560 | orchestrator | 2026-01-01 00:39:55.656569 | orchestrator | 2026-01-01 00:39:55.656575 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:39:55.656582 | orchestrator | Thursday 01 January 2026 00:39:55 +0000 (0:00:00.559) 0:00:09.324 ****** 2026-01-01 00:39:55.656588 | orchestrator | =============================================================================== 2026-01-01 00:39:55.656594 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.54s 2026-01-01 00:39:55.656601 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-01-01 00:39:56.002834 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-01 00:39:56.022625 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-01 00:39:56.041690 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-01 00:39:56.062679 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-01 00:39:56.081247 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-01 00:39:56.097050 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-01 00:39:56.112277 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-01 00:39:56.132248 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-01 00:39:56.147743 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-01 00:39:56.161134 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-01 00:39:56.183305 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-01 00:39:56.205123 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-01 00:39:56.224755 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-01 00:39:56.244997 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-01 00:39:56.264955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-01 00:39:56.287891 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-01 00:39:56.304989 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-01 00:39:56.325271 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-01 00:39:56.344445 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-01 00:39:56.358386 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-01 00:39:56.377265 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-01 00:39:56.813939 | orchestrator | ok: Runtime: 0:24:58.014047 2026-01-01 00:39:56.919561 | 2026-01-01 00:39:56.919730 | TASK [Deploy services] 2026-01-01 00:39:57.454477 | orchestrator | skipping: Conditional result was False 2026-01-01 00:39:57.473331 | 2026-01-01 00:39:57.473497 | TASK [Deploy in a nutshell] 2026-01-01 00:39:58.189628 | orchestrator | + set -e 2026-01-01 00:39:58.189905 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-01 00:39:58.189931 | orchestrator | ++ export INTERACTIVE=false 2026-01-01 00:39:58.189953 | orchestrator | ++ INTERACTIVE=false 2026-01-01 00:39:58.189967 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-01 00:39:58.189979 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-01 00:39:58.189994 | orchestrator | + source /opt/manager-vars.sh 2026-01-01 00:39:58.190082 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-01 00:39:58.190116 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-01 00:39:58.190131 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-01 00:39:58.190148 | orchestrator | ++ CEPH_VERSION=reef 2026-01-01 00:39:58.190160 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-01 00:39:58.190179 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2026-01-01 00:39:58.190191 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-01 00:39:58.190211 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-01 00:39:58.190222 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-01 00:39:58.190238 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-01 00:39:58.190249 | orchestrator | ++ export ARA=false 2026-01-01 00:39:58.190261 | orchestrator | ++ ARA=false 2026-01-01 00:39:58.190272 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-01 00:39:58.190288 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-01 00:39:58.190300 | orchestrator | ++ export TEMPEST=true 2026-01-01 00:39:58.190310 | orchestrator | ++ TEMPEST=true 2026-01-01 00:39:58.190321 | orchestrator | ++ export IS_ZUUL=true 2026-01-01 00:39:58.190333 | orchestrator | ++ IS_ZUUL=true 2026-01-01 00:39:58.190344 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.183 2026-01-01 00:39:58.190355 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.183 2026-01-01 00:39:58.190366 | orchestrator | ++ export EXTERNAL_API=false 2026-01-01 00:39:58.190377 | orchestrator | ++ EXTERNAL_API=false 2026-01-01 00:39:58.190388 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-01 00:39:58.190399 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-01 00:39:58.190410 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-01 00:39:58.190420 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-01 00:39:58.190431 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-01 00:39:58.190443 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-01 00:39:58.190471 | orchestrator | 2026-01-01 00:39:58.190483 | orchestrator | # PULL IMAGES 2026-01-01 00:39:58.190494 | orchestrator | 2026-01-01 00:39:58.190505 | orchestrator | + echo 2026-01-01 00:39:58.190516 | orchestrator | + echo '# PULL IMAGES' 2026-01-01 00:39:58.190527 | orchestrator | + echo 2026-01-01 00:39:58.191192 | orchestrator | ++ semver latest 7.0.0 2026-01-01 
00:39:58.252887 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-01 00:39:58.252998 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-01 00:39:58.253038 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-01 00:40:00.202146 | orchestrator | 2026-01-01 00:40:00 | INFO  | Trying to run play pull-images in environment custom 2026-01-01 00:40:10.343565 | orchestrator | 2026-01-01 00:40:10 | INFO  | Task 40f5d3df-f16c-4b5a-aaf6-9823ec37accb (pull-images) was prepared for execution. 2026-01-01 00:40:10.343792 | orchestrator | 2026-01-01 00:40:10 | INFO  | Task 40f5d3df-f16c-4b5a-aaf6-9823ec37accb is running in background. No more output. Check ARA for logs. 2026-01-01 00:40:12.849249 | orchestrator | 2026-01-01 00:40:12 | INFO  | Trying to run play wipe-partitions in environment custom 2026-01-01 00:40:22.962809 | orchestrator | 2026-01-01 00:40:22 | INFO  | Task b0d61735-0d18-4098-8ed0-913ad8e0ff92 (wipe-partitions) was prepared for execution. 2026-01-01 00:40:22.962963 | orchestrator | 2026-01-01 00:40:22 | INFO  | It takes a moment until task b0d61735-0d18-4098-8ed0-913ad8e0ff92 (wipe-partitions) has been started and output is visible here. 
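The `semver latest 7.0.0` / `[[ -1 -ge 0 ]]` / `[[ latest == latest ]]` sequence in the trace is a version gate: take the new code path when the manager version is at least a given minimum, and treat the literal tag `latest` as always passing, since it cannot be ordered numerically. The sketch below captures that logic under the assumption that `semver A B` prints `-1`, `0`, or `1` for A less than, equal to, or greater than B; the stand-in implementation here uses `sort -V` and is mine, not the helper installed on the node.

```shell
# Hypothetical stand-in for the `semver` helper seen in the trace:
# prints -1 / 0 / 1 for $1 < $2, $1 = $2, $1 > $2 (GNU sort -V ordering).
semver() {
    if [ "$1" = "$2" ]; then echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then echo -1
    else echo 1
    fi
}

# The gate pattern from the trace: numeric comparison first, then the
# explicit string check so the tag "latest" always takes the new path.
version_at_least() {
    local version=$1 minimum=$2
    [ "$(semver "$version" "$minimum")" -ge 0 ] || [ "$version" = latest ]
}
```

With `MANAGER_VERSION=latest`, as in this job, the gate succeeds regardless of what the numeric comparison returns.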
2026-01-01 00:40:36.121992 | orchestrator | 2026-01-01 00:40:36.122170 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-01-01 00:40:36.122187 | orchestrator | 2026-01-01 00:40:36.122199 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-01-01 00:40:36.122215 | orchestrator | Thursday 01 January 2026 00:40:27 +0000 (0:00:00.196) 0:00:00.196 ****** 2026-01-01 00:40:36.122229 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:40:36.122241 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:40:36.122252 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:40:36.122264 | orchestrator | 2026-01-01 00:40:36.122275 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-01-01 00:40:36.122311 | orchestrator | Thursday 01 January 2026 00:40:28 +0000 (0:00:00.644) 0:00:00.840 ****** 2026-01-01 00:40:36.122322 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:40:36.122334 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:40:36.122349 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:40:36.122360 | orchestrator | 2026-01-01 00:40:36.122371 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-01-01 00:40:36.122382 | orchestrator | Thursday 01 January 2026 00:40:28 +0000 (0:00:00.365) 0:00:01.206 ****** 2026-01-01 00:40:36.122393 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:40:36.122404 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:40:36.122415 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:40:36.122426 | orchestrator | 2026-01-01 00:40:36.122437 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-01-01 00:40:36.122448 | orchestrator | Thursday 01 January 2026 00:40:29 +0000 (0:00:00.564) 0:00:01.770 ****** 2026-01-01 00:40:36.122459 | orchestrator | skipping: 
[testbed-node-3] 2026-01-01 00:40:36.122470 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:40:36.122483 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:40:36.122495 | orchestrator | 2026-01-01 00:40:36.122509 | orchestrator | TASK [Check device availability] *********************************************** 2026-01-01 00:40:36.122521 | orchestrator | Thursday 01 January 2026 00:40:29 +0000 (0:00:00.285) 0:00:02.056 ****** 2026-01-01 00:40:36.122535 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-01 00:40:36.122551 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-01 00:40:36.122564 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-01 00:40:36.122577 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-01 00:40:36.122590 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-01 00:40:36.122603 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-01 00:40:36.122615 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-01 00:40:36.122628 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-01 00:40:36.122640 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-01 00:40:36.122652 | orchestrator | 2026-01-01 00:40:36.122689 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-01-01 00:40:36.122704 | orchestrator | Thursday 01 January 2026 00:40:30 +0000 (0:00:01.172) 0:00:03.228 ****** 2026-01-01 00:40:36.122718 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-01-01 00:40:36.122730 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-01-01 00:40:36.122743 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-01-01 00:40:36.122757 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-01-01 00:40:36.122769 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-01-01 00:40:36.122782 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-01-01 00:40:36.122795 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-01-01 00:40:36.122807 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-01-01 00:40:36.122820 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-01-01 00:40:36.122833 | orchestrator | 2026-01-01 00:40:36.122845 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-01-01 00:40:36.122856 | orchestrator | Thursday 01 January 2026 00:40:32 +0000 (0:00:01.571) 0:00:04.800 ****** 2026-01-01 00:40:36.122867 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-01-01 00:40:36.122877 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-01-01 00:40:36.122888 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-01-01 00:40:36.122899 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-01-01 00:40:36.122909 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-01-01 00:40:36.122927 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-01-01 00:40:36.122939 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-01-01 00:40:36.122960 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-01-01 00:40:36.122972 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-01-01 00:40:36.122982 | orchestrator | 2026-01-01 00:40:36.122993 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-01-01 00:40:36.123004 | orchestrator | Thursday 01 January 2026 00:40:34 +0000 (0:00:02.078) 0:00:06.878 ****** 2026-01-01 00:40:36.123015 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:40:36.123026 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:40:36.123036 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:40:36.123047 | orchestrator | 2026-01-01 00:40:36.123058 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-01-01 00:40:36.123068 | orchestrator | Thursday 01 January 2026 00:40:35 +0000 (0:00:00.640) 0:00:07.519 ****** 2026-01-01 00:40:36.123079 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:40:36.123090 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:40:36.123101 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:40:36.123111 | orchestrator | 2026-01-01 00:40:36.123122 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:40:36.123135 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:40:36.123147 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:40:36.123176 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:40:36.123188 | orchestrator | 2026-01-01 00:40:36.123199 | orchestrator | 2026-01-01 00:40:36.123210 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:40:36.123221 | orchestrator | Thursday 01 January 2026 00:40:35 +0000 (0:00:00.629) 0:00:08.149 ****** 2026-01-01 00:40:36.123232 | orchestrator | =============================================================================== 2026-01-01 00:40:36.123243 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.08s 2026-01-01 00:40:36.123253 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s 2026-01-01 00:40:36.123264 | orchestrator | Check device availability ----------------------------------------------- 1.17s 2026-01-01 00:40:36.123275 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.64s 2026-01-01 00:40:36.123285 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.64s 2026-01-01 00:40:36.123296 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2026-01-01 00:40:36.123307 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.56s 2026-01-01 00:40:36.123318 | orchestrator | Remove all rook related logical devices --------------------------------- 0.37s 2026-01-01 00:40:36.123329 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s 2026-01-01 00:40:48.632173 | orchestrator | 2026-01-01 00:40:48 | INFO  | Task 19d19b14-ea1b-4059-9e8a-b27c1b9066e7 (facts) was prepared for execution. 2026-01-01 00:40:48.632294 | orchestrator | 2026-01-01 00:40:48 | INFO  | It takes a moment until task 19d19b14-ea1b-4059-9e8a-b27c1b9066e7 (facts) has been started and output is visible here. 2026-01-01 00:41:01.492217 | orchestrator | 2026-01-01 00:41:01.492359 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-01 00:41:01.492377 | orchestrator | 2026-01-01 00:41:01.492390 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-01 00:41:01.492403 | orchestrator | Thursday 01 January 2026 00:40:53 +0000 (0:00:00.271) 0:00:00.271 ****** 2026-01-01 00:41:01.492414 | orchestrator | ok: [testbed-manager] 2026-01-01 00:41:01.492426 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:41:01.492437 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:41:01.492482 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:41:01.492493 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:41:01.492504 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:41:01.492514 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:41:01.492525 | orchestrator | 2026-01-01 00:41:01.492539 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-01 
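Per device, the wipe-partitions play above amounts to roughly the following (device list and the 32M size are taken from the log; the exact module arguments are assumptions, and `wipe_device` is a hypothetical helper name):

```shell
# DESTRUCTIVE on real devices: removes filesystem/partition signatures
# and zeroes the first 32M of the given block device (or file).
wipe_device() {
  local dev=$1
  wipefs --all "$dev"                    # "Wipe partitions with wipefs"
  dd if=/dev/zero of="$dev" bs=1M count=32 \
     conv=notrunc status=none            # "Overwrite first 32M with zeros"
}

# for dev in /dev/sdb /dev/sdc /dev/sdd; do wipe_device "$dev"; done
# Then, as in the last two tasks of the play:
#   udevadm control --reload    # "Reload udev rules"
#   udevadm trigger             # "Request device events from the kernel"
```

The udev reload/trigger pair makes the kernel re-announce the now-blank devices so later plays see them without stale partition metadata.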
00:41:01.492550 | orchestrator | Thursday 01 January 2026 00:40:54 +0000 (0:00:01.141) 0:00:01.413 ****** 2026-01-01 00:41:01.492561 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:41:01.492572 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:41:01.492583 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:41:01.492593 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:41:01.492604 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:01.492614 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:01.492625 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:41:01.492636 | orchestrator | 2026-01-01 00:41:01.492646 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-01 00:41:01.492657 | orchestrator | 2026-01-01 00:41:01.492668 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-01 00:41:01.492702 | orchestrator | Thursday 01 January 2026 00:40:55 +0000 (0:00:01.324) 0:00:02.737 ****** 2026-01-01 00:41:01.492718 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:41:01.492731 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:41:01.492745 | orchestrator | ok: [testbed-manager] 2026-01-01 00:41:01.492758 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:41:01.492770 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:41:01.492783 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:41:01.492795 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:41:01.492809 | orchestrator | 2026-01-01 00:41:01.492821 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-01 00:41:01.492834 | orchestrator | 2026-01-01 00:41:01.492847 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-01 00:41:01.492880 | orchestrator | Thursday 01 January 2026 00:41:00 +0000 (0:00:04.967) 0:00:07.705 ****** 2026-01-01 00:41:01.492894 | 
orchestrator | skipping: [testbed-manager] 2026-01-01 00:41:01.492907 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:41:01.492921 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:41:01.492933 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:41:01.492946 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:01.492958 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:01.492972 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:41:01.492984 | orchestrator | 2026-01-01 00:41:01.492997 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:41:01.493010 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:41:01.493025 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:41:01.493037 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:41:01.493051 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:41:01.493065 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:41:01.493079 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:41:01.493092 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:41:01.493104 | orchestrator | 2026-01-01 00:41:01.493122 | orchestrator | 2026-01-01 00:41:01.493133 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:41:01.493144 | orchestrator | Thursday 01 January 2026 00:41:01 +0000 (0:00:00.547) 0:00:08.253 ****** 2026-01-01 00:41:01.493155 | orchestrator | 
=============================================================================== 2026-01-01 00:41:01.493166 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.97s 2026-01-01 00:41:01.493177 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s 2026-01-01 00:41:01.493187 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2026-01-01 00:41:01.493198 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-01-01 00:41:04.013601 | orchestrator | 2026-01-01 00:41:04 | INFO  | Task 60105a67-18fc-462f-b1e0-6f344c4cb759 (ceph-configure-lvm-volumes) was prepared for execution. 2026-01-01 00:41:04.013770 | orchestrator | 2026-01-01 00:41:04 | INFO  | It takes a moment until task 60105a67-18fc-462f-b1e0-6f344c4cb759 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-01-01 00:41:16.405988 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-01 00:41:16.406170 | orchestrator | 2.16.14 2026-01-01 00:41:16.406189 | orchestrator | 2026-01-01 00:41:16.406202 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-01 00:41:16.406215 | orchestrator | 2026-01-01 00:41:16.406229 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-01 00:41:16.406242 | orchestrator | Thursday 01 January 2026 00:41:08 +0000 (0:00:00.353) 0:00:00.353 ****** 2026-01-01 00:41:16.406254 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-01 00:41:16.406266 | orchestrator | 2026-01-01 00:41:16.406277 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-01 00:41:16.406288 | orchestrator | Thursday 01 January 2026 00:41:09 +0000 (0:00:00.264) 0:00:00.618 ****** 2026-01-01 00:41:16.406299 | 
orchestrator | ok: [testbed-node-3] 2026-01-01 00:41:16.406311 | orchestrator | 2026-01-01 00:41:16.406322 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.406333 | orchestrator | Thursday 01 January 2026 00:41:09 +0000 (0:00:00.236) 0:00:00.854 ****** 2026-01-01 00:41:16.406345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-01-01 00:41:16.406356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-01-01 00:41:16.406367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-01-01 00:41:16.406378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-01-01 00:41:16.406389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-01-01 00:41:16.406400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-01-01 00:41:16.406411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-01-01 00:41:16.406422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-01-01 00:41:16.406433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-01-01 00:41:16.406444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-01-01 00:41:16.406475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-01-01 00:41:16.406487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-01-01 00:41:16.406498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-01-01 00:41:16.406509 | orchestrator | 
2026-01-01 00:41:16.406520 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.406555 | orchestrator | Thursday 01 January 2026 00:41:09 +0000 (0:00:00.564) 0:00:01.419 ****** 2026-01-01 00:41:16.406567 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.406578 | orchestrator | 2026-01-01 00:41:16.406589 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.406600 | orchestrator | Thursday 01 January 2026 00:41:10 +0000 (0:00:00.215) 0:00:01.634 ****** 2026-01-01 00:41:16.406611 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.406621 | orchestrator | 2026-01-01 00:41:16.406632 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.406643 | orchestrator | Thursday 01 January 2026 00:41:10 +0000 (0:00:00.193) 0:00:01.827 ****** 2026-01-01 00:41:16.406654 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.406665 | orchestrator | 2026-01-01 00:41:16.406676 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.406722 | orchestrator | Thursday 01 January 2026 00:41:10 +0000 (0:00:00.276) 0:00:02.104 ****** 2026-01-01 00:41:16.406734 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.406745 | orchestrator | 2026-01-01 00:41:16.406777 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.406801 | orchestrator | Thursday 01 January 2026 00:41:10 +0000 (0:00:00.202) 0:00:02.307 ****** 2026-01-01 00:41:16.406812 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.406823 | orchestrator | 2026-01-01 00:41:16.406834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.406845 | orchestrator | Thursday 01 January 2026 00:41:10 +0000 
(0:00:00.191) 0:00:02.499 ****** 2026-01-01 00:41:16.406856 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.406867 | orchestrator | 2026-01-01 00:41:16.406878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.406889 | orchestrator | Thursday 01 January 2026 00:41:11 +0000 (0:00:00.202) 0:00:02.702 ****** 2026-01-01 00:41:16.406900 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.406911 | orchestrator | 2026-01-01 00:41:16.406922 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.406933 | orchestrator | Thursday 01 January 2026 00:41:11 +0000 (0:00:00.204) 0:00:02.906 ****** 2026-01-01 00:41:16.406944 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.406955 | orchestrator | 2026-01-01 00:41:16.406966 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.406977 | orchestrator | Thursday 01 January 2026 00:41:11 +0000 (0:00:00.209) 0:00:03.116 ****** 2026-01-01 00:41:16.406987 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91) 2026-01-01 00:41:16.407000 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91) 2026-01-01 00:41:16.407011 | orchestrator | 2026-01-01 00:41:16.407022 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.407053 | orchestrator | Thursday 01 January 2026 00:41:11 +0000 (0:00:00.423) 0:00:03.540 ****** 2026-01-01 00:41:16.407065 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec) 2026-01-01 00:41:16.407076 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec) 2026-01-01 00:41:16.407087 | orchestrator | 2026-01-01 
00:41:16.407098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.407109 | orchestrator | Thursday 01 January 2026 00:41:12 +0000 (0:00:00.646) 0:00:04.186 ****** 2026-01-01 00:41:16.407120 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486) 2026-01-01 00:41:16.407131 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486) 2026-01-01 00:41:16.407142 | orchestrator | 2026-01-01 00:41:16.407153 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.407173 | orchestrator | Thursday 01 January 2026 00:41:13 +0000 (0:00:00.664) 0:00:04.851 ****** 2026-01-01 00:41:16.407184 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1) 2026-01-01 00:41:16.407195 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1) 2026-01-01 00:41:16.407206 | orchestrator | 2026-01-01 00:41:16.407217 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:16.407228 | orchestrator | Thursday 01 January 2026 00:41:14 +0000 (0:00:00.932) 0:00:05.784 ****** 2026-01-01 00:41:16.407238 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-01 00:41:16.407249 | orchestrator | 2026-01-01 00:41:16.407266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:16.407278 | orchestrator | Thursday 01 January 2026 00:41:14 +0000 (0:00:00.340) 0:00:06.124 ****** 2026-01-01 00:41:16.407288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-01 00:41:16.407299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 
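The repeated "Add known links" tasks above appear to collect, for each base device, the stable `/dev/disk/by-id` symlinks that resolve to it (the `scsi-0QEMU_...`/`scsi-SQEMU_...` names in the output). A sketch of that lookup — `list_links` is a hypothetical helper, not a name from the playbook:

```shell
# Print every symlink in <by-id-dir> that resolves to <device>.
list_links() {  # usage: list_links <by-id-dir> <device>
  local dir=$1 dev=$2 link
  for link in "$dir"/*; do
    [ -L "$link" ] || continue
    [ "$(readlink -f "$link")" = "$(readlink -f "$dev")" ] && echo "$link"
  done
}

# Against a real system (requires /dev/disk/by-id):
# list_links /dev/disk/by-id /dev/sdb
```

Addressing OSD disks by these stable ids rather than by `sdb`/`sdc` keeps the Ceph configuration valid even if kernel device enumeration changes across reboots.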
2026-01-01 00:41:16.407310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-01 00:41:16.407321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-01 00:41:16.407331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-01 00:41:16.407342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-01 00:41:16.407353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-01 00:41:16.407364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-01 00:41:16.407374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-01 00:41:16.407385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-01 00:41:16.407396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-01 00:41:16.407407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-01 00:41:16.407417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-01 00:41:16.407428 | orchestrator | 2026-01-01 00:41:16.407439 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:16.407450 | orchestrator | Thursday 01 January 2026 00:41:14 +0000 (0:00:00.395) 0:00:06.519 ****** 2026-01-01 00:41:16.407461 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.407472 | orchestrator | 2026-01-01 00:41:16.407483 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:16.407493 | orchestrator 
| Thursday 01 January 2026 00:41:15 +0000 (0:00:00.208) 0:00:06.728 ****** 2026-01-01 00:41:16.407504 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.407515 | orchestrator | 2026-01-01 00:41:16.407526 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:16.407537 | orchestrator | Thursday 01 January 2026 00:41:15 +0000 (0:00:00.219) 0:00:06.948 ****** 2026-01-01 00:41:16.407547 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.407558 | orchestrator | 2026-01-01 00:41:16.407569 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:16.407580 | orchestrator | Thursday 01 January 2026 00:41:15 +0000 (0:00:00.204) 0:00:07.153 ****** 2026-01-01 00:41:16.407591 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.407602 | orchestrator | 2026-01-01 00:41:16.407613 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:16.407623 | orchestrator | Thursday 01 January 2026 00:41:15 +0000 (0:00:00.193) 0:00:07.347 ****** 2026-01-01 00:41:16.407641 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.407652 | orchestrator | 2026-01-01 00:41:16.407663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:16.407674 | orchestrator | Thursday 01 January 2026 00:41:15 +0000 (0:00:00.208) 0:00:07.555 ****** 2026-01-01 00:41:16.407836 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.407866 | orchestrator | 2026-01-01 00:41:16.407877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:16.407888 | orchestrator | Thursday 01 January 2026 00:41:16 +0000 (0:00:00.234) 0:00:07.790 ****** 2026-01-01 00:41:16.407899 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:16.407910 | orchestrator | 2026-01-01 
00:41:16.407933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:24.397947 | orchestrator | Thursday 01 January 2026 00:41:16 +0000 (0:00:00.201) 0:00:07.991 ****** 2026-01-01 00:41:24.398070 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398080 | orchestrator | 2026-01-01 00:41:24.398087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:24.398092 | orchestrator | Thursday 01 January 2026 00:41:16 +0000 (0:00:00.199) 0:00:08.191 ****** 2026-01-01 00:41:24.398098 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-01 00:41:24.398103 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-01 00:41:24.398108 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-01 00:41:24.398113 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-01 00:41:24.398117 | orchestrator | 2026-01-01 00:41:24.398122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:24.398127 | orchestrator | Thursday 01 January 2026 00:41:17 +0000 (0:00:01.061) 0:00:09.253 ****** 2026-01-01 00:41:24.398132 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398136 | orchestrator | 2026-01-01 00:41:24.398141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:24.398146 | orchestrator | Thursday 01 January 2026 00:41:17 +0000 (0:00:00.212) 0:00:09.466 ****** 2026-01-01 00:41:24.398150 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398155 | orchestrator | 2026-01-01 00:41:24.398159 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:24.398164 | orchestrator | Thursday 01 January 2026 00:41:18 +0000 (0:00:00.205) 0:00:09.671 ****** 2026-01-01 00:41:24.398168 | orchestrator | skipping: [testbed-node-3] 2026-01-01 
00:41:24.398173 | orchestrator | 2026-01-01 00:41:24.398178 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:24.398182 | orchestrator | Thursday 01 January 2026 00:41:18 +0000 (0:00:00.244) 0:00:09.916 ****** 2026-01-01 00:41:24.398187 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398191 | orchestrator | 2026-01-01 00:41:24.398196 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-01 00:41:24.398200 | orchestrator | Thursday 01 January 2026 00:41:18 +0000 (0:00:00.254) 0:00:10.171 ****** 2026-01-01 00:41:24.398205 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-01-01 00:41:24.398210 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-01-01 00:41:24.398215 | orchestrator | 2026-01-01 00:41:24.398237 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-01 00:41:24.398246 | orchestrator | Thursday 01 January 2026 00:41:18 +0000 (0:00:00.181) 0:00:10.352 ****** 2026-01-01 00:41:24.398253 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398261 | orchestrator | 2026-01-01 00:41:24.398269 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-01 00:41:24.398277 | orchestrator | Thursday 01 January 2026 00:41:18 +0000 (0:00:00.130) 0:00:10.483 ****** 2026-01-01 00:41:24.398285 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398293 | orchestrator | 2026-01-01 00:41:24.398301 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-01 00:41:24.398329 | orchestrator | Thursday 01 January 2026 00:41:19 +0000 (0:00:00.165) 0:00:10.648 ****** 2026-01-01 00:41:24.398335 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398340 | orchestrator | 2026-01-01 00:41:24.398344 | orchestrator 
| TASK [Define lvm_volumes structures] ******************************************* 2026-01-01 00:41:24.398349 | orchestrator | Thursday 01 January 2026 00:41:19 +0000 (0:00:00.139) 0:00:10.788 ****** 2026-01-01 00:41:24.398353 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:41:24.398358 | orchestrator | 2026-01-01 00:41:24.398362 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-01 00:41:24.398367 | orchestrator | Thursday 01 January 2026 00:41:19 +0000 (0:00:00.149) 0:00:10.937 ****** 2026-01-01 00:41:24.398372 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '906f607d-f8ab-576d-9485-c345cfde3c80'}}) 2026-01-01 00:41:24.398377 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '27db58f4-0fe4-54a7-94bd-e6fe47c26f99'}}) 2026-01-01 00:41:24.398382 | orchestrator | 2026-01-01 00:41:24.398388 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-01 00:41:24.398396 | orchestrator | Thursday 01 January 2026 00:41:19 +0000 (0:00:00.175) 0:00:11.112 ****** 2026-01-01 00:41:24.398405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '906f607d-f8ab-576d-9485-c345cfde3c80'}})  2026-01-01 00:41:24.398417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '27db58f4-0fe4-54a7-94bd-e6fe47c26f99'}})  2026-01-01 00:41:24.398425 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398432 | orchestrator | 2026-01-01 00:41:24.398439 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-01 00:41:24.398447 | orchestrator | Thursday 01 January 2026 00:41:19 +0000 (0:00:00.163) 0:00:11.276 ****** 2026-01-01 00:41:24.398455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '906f607d-f8ab-576d-9485-c345cfde3c80'}})  
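For the "block only" case taken here, the lvm_volumes structure is derived from the per-device OSD UUIDs shown in this trace. A hedged sketch of that derivation — the `ceph-<uuid>`/`osd-block-<uuid>` VG/LV naming scheme is an assumption for illustration, not read from the playbook:

```shell
# UUIDs copied from the ceph_osd_devices output in this log.
declare -A ceph_osd_devices=(
  [sdb]=906f607d-f8ab-576d-9485-c345cfde3c80
  [sdc]=27db58f4-0fe4-54a7-94bd-e6fe47c26f99
)

# Emit ceph-ansible style lvm_volumes entries, one per OSD device.
echo "lvm_volumes:"
for dev in "${!ceph_osd_devices[@]}"; do
  uuid=${ceph_osd_devices[$dev]}
  printf '  - data: osd-block-%s\n    data_vg: ceph-%s\n' "$uuid" "$uuid"
done
```

Keying the volume group and logical volume on a per-device UUID (rather than on `sdb`/`sdc`) is what lets the later db/wal variants reference the same OSD unambiguously.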
2026-01-01 00:41:24.398463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '27db58f4-0fe4-54a7-94bd-e6fe47c26f99'}})  2026-01-01 00:41:24.398471 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398479 | orchestrator | 2026-01-01 00:41:24.398487 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-01 00:41:24.398493 | orchestrator | Thursday 01 January 2026 00:41:20 +0000 (0:00:00.381) 0:00:11.658 ****** 2026-01-01 00:41:24.398498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '906f607d-f8ab-576d-9485-c345cfde3c80'}})  2026-01-01 00:41:24.398516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '27db58f4-0fe4-54a7-94bd-e6fe47c26f99'}})  2026-01-01 00:41:24.398521 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398526 | orchestrator | 2026-01-01 00:41:24.398531 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-01 00:41:24.398541 | orchestrator | Thursday 01 January 2026 00:41:20 +0000 (0:00:00.156) 0:00:11.814 ****** 2026-01-01 00:41:24.398546 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:41:24.398552 | orchestrator | 2026-01-01 00:41:24.398558 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-01 00:41:24.398566 | orchestrator | Thursday 01 January 2026 00:41:20 +0000 (0:00:00.150) 0:00:11.965 ****** 2026-01-01 00:41:24.398573 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:41:24.398581 | orchestrator | 2026-01-01 00:41:24.398588 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-01 00:41:24.398596 | orchestrator | Thursday 01 January 2026 00:41:20 +0000 (0:00:00.151) 0:00:12.117 ****** 2026-01-01 00:41:24.398603 | orchestrator | skipping: [testbed-node-3] 2026-01-01 
00:41:24.398610 | orchestrator | 2026-01-01 00:41:24.398618 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-01 00:41:24.398626 | orchestrator | Thursday 01 January 2026 00:41:20 +0000 (0:00:00.131) 0:00:12.248 ****** 2026-01-01 00:41:24.398640 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398647 | orchestrator | 2026-01-01 00:41:24.398654 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-01 00:41:24.398662 | orchestrator | Thursday 01 January 2026 00:41:20 +0000 (0:00:00.152) 0:00:12.400 ****** 2026-01-01 00:41:24.398669 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398677 | orchestrator | 2026-01-01 00:41:24.398700 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-01 00:41:24.398707 | orchestrator | Thursday 01 January 2026 00:41:20 +0000 (0:00:00.145) 0:00:12.546 ****** 2026-01-01 00:41:24.398712 | orchestrator | ok: [testbed-node-3] => { 2026-01-01 00:41:24.398717 | orchestrator |  "ceph_osd_devices": { 2026-01-01 00:41:24.398721 | orchestrator |  "sdb": { 2026-01-01 00:41:24.398726 | orchestrator |  "osd_lvm_uuid": "906f607d-f8ab-576d-9485-c345cfde3c80" 2026-01-01 00:41:24.398731 | orchestrator |  }, 2026-01-01 00:41:24.398736 | orchestrator |  "sdc": { 2026-01-01 00:41:24.398740 | orchestrator |  "osd_lvm_uuid": "27db58f4-0fe4-54a7-94bd-e6fe47c26f99" 2026-01-01 00:41:24.398745 | orchestrator |  } 2026-01-01 00:41:24.398749 | orchestrator |  } 2026-01-01 00:41:24.398754 | orchestrator | } 2026-01-01 00:41:24.398759 | orchestrator | 2026-01-01 00:41:24.398763 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-01 00:41:24.398768 | orchestrator | Thursday 01 January 2026 00:41:21 +0000 (0:00:00.158) 0:00:12.704 ****** 2026-01-01 00:41:24.398772 | orchestrator | skipping: [testbed-node-3] 2026-01-01 
00:41:24.398777 | orchestrator | 2026-01-01 00:41:24.398781 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-01 00:41:24.398786 | orchestrator | Thursday 01 January 2026 00:41:21 +0000 (0:00:00.126) 0:00:12.831 ****** 2026-01-01 00:41:24.398790 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398795 | orchestrator | 2026-01-01 00:41:24.398799 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-01 00:41:24.398804 | orchestrator | Thursday 01 January 2026 00:41:21 +0000 (0:00:00.145) 0:00:12.976 ****** 2026-01-01 00:41:24.398808 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:41:24.398813 | orchestrator | 2026-01-01 00:41:24.398817 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-01 00:41:24.398822 | orchestrator | Thursday 01 January 2026 00:41:21 +0000 (0:00:00.145) 0:00:13.121 ****** 2026-01-01 00:41:24.398826 | orchestrator | changed: [testbed-node-3] => { 2026-01-01 00:41:24.398831 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-01 00:41:24.398835 | orchestrator |  "ceph_osd_devices": { 2026-01-01 00:41:24.398840 | orchestrator |  "sdb": { 2026-01-01 00:41:24.398844 | orchestrator |  "osd_lvm_uuid": "906f607d-f8ab-576d-9485-c345cfde3c80" 2026-01-01 00:41:24.398849 | orchestrator |  }, 2026-01-01 00:41:24.398853 | orchestrator |  "sdc": { 2026-01-01 00:41:24.398858 | orchestrator |  "osd_lvm_uuid": "27db58f4-0fe4-54a7-94bd-e6fe47c26f99" 2026-01-01 00:41:24.398862 | orchestrator |  } 2026-01-01 00:41:24.398867 | orchestrator |  }, 2026-01-01 00:41:24.398871 | orchestrator |  "lvm_volumes": [ 2026-01-01 00:41:24.398876 | orchestrator |  { 2026-01-01 00:41:24.398880 | orchestrator |  "data": "osd-block-906f607d-f8ab-576d-9485-c345cfde3c80", 2026-01-01 00:41:24.398885 | orchestrator |  "data_vg": "ceph-906f607d-f8ab-576d-9485-c345cfde3c80" 2026-01-01 
00:41:24.398889 | orchestrator |  }, 2026-01-01 00:41:24.398894 | orchestrator |  { 2026-01-01 00:41:24.398898 | orchestrator |  "data": "osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99", 2026-01-01 00:41:24.398903 | orchestrator |  "data_vg": "ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99" 2026-01-01 00:41:24.398911 | orchestrator |  } 2026-01-01 00:41:24.398916 | orchestrator |  ] 2026-01-01 00:41:24.398920 | orchestrator |  } 2026-01-01 00:41:24.398930 | orchestrator | } 2026-01-01 00:41:24.398934 | orchestrator | 2026-01-01 00:41:24.398939 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-01 00:41:24.398943 | orchestrator | Thursday 01 January 2026 00:41:21 +0000 (0:00:00.427) 0:00:13.549 ****** 2026-01-01 00:41:24.398948 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-01 00:41:24.398952 | orchestrator | 2026-01-01 00:41:24.398957 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-01 00:41:24.398961 | orchestrator | 2026-01-01 00:41:24.398966 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-01 00:41:24.398970 | orchestrator | Thursday 01 January 2026 00:41:23 +0000 (0:00:01.899) 0:00:15.449 ****** 2026-01-01 00:41:24.398975 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-01 00:41:24.398979 | orchestrator | 2026-01-01 00:41:24.398984 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-01 00:41:24.398988 | orchestrator | Thursday 01 January 2026 00:41:24 +0000 (0:00:00.272) 0:00:15.722 ****** 2026-01-01 00:41:24.398996 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:41:24.399004 | orchestrator | 2026-01-01 00:41:24.399016 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.796949 | orchestrator | Thursday 01 January 
2026 00:41:24 +0000 (0:00:00.260) 0:00:15.983 ****** 2026-01-01 00:41:32.797094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-01 00:41:32.797122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-01 00:41:32.797141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-01 00:41:32.797160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-01 00:41:32.797179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-01 00:41:32.797199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-01 00:41:32.797219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-01 00:41:32.797237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-01 00:41:32.797256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-01 00:41:32.797274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-01 00:41:32.797293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-01 00:41:32.797316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-01 00:41:32.797336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-01 00:41:32.797356 | orchestrator | 2026-01-01 00:41:32.797379 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.797398 | orchestrator | Thursday 01 January 2026 00:41:24 +0000 (0:00:00.402) 0:00:16.385 ****** 2026-01-01 00:41:32.797418 | 
orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.797440 | orchestrator | 2026-01-01 00:41:32.797461 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.797481 | orchestrator | Thursday 01 January 2026 00:41:25 +0000 (0:00:00.228) 0:00:16.613 ****** 2026-01-01 00:41:32.797502 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.797523 | orchestrator | 2026-01-01 00:41:32.797542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.797561 | orchestrator | Thursday 01 January 2026 00:41:25 +0000 (0:00:00.218) 0:00:16.832 ****** 2026-01-01 00:41:32.797579 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.797598 | orchestrator | 2026-01-01 00:41:32.797618 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.797676 | orchestrator | Thursday 01 January 2026 00:41:25 +0000 (0:00:00.247) 0:00:17.079 ****** 2026-01-01 00:41:32.797745 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.797766 | orchestrator | 2026-01-01 00:41:32.797785 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.797804 | orchestrator | Thursday 01 January 2026 00:41:25 +0000 (0:00:00.205) 0:00:17.284 ****** 2026-01-01 00:41:32.797823 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.797843 | orchestrator | 2026-01-01 00:41:32.797863 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.797882 | orchestrator | Thursday 01 January 2026 00:41:26 +0000 (0:00:00.657) 0:00:17.942 ****** 2026-01-01 00:41:32.797901 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.797919 | orchestrator | 2026-01-01 00:41:32.797958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2026-01-01 00:41:32.797977 | orchestrator | Thursday 01 January 2026 00:41:26 +0000 (0:00:00.215) 0:00:18.158 ****** 2026-01-01 00:41:32.797996 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.798087 | orchestrator | 2026-01-01 00:41:32.798114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.798164 | orchestrator | Thursday 01 January 2026 00:41:26 +0000 (0:00:00.211) 0:00:18.370 ****** 2026-01-01 00:41:32.798176 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.798186 | orchestrator | 2026-01-01 00:41:32.798197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.798208 | orchestrator | Thursday 01 January 2026 00:41:26 +0000 (0:00:00.212) 0:00:18.582 ****** 2026-01-01 00:41:32.798219 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea) 2026-01-01 00:41:32.798231 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea) 2026-01-01 00:41:32.798242 | orchestrator | 2026-01-01 00:41:32.798252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.798263 | orchestrator | Thursday 01 January 2026 00:41:27 +0000 (0:00:00.463) 0:00:19.046 ****** 2026-01-01 00:41:32.798273 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0) 2026-01-01 00:41:32.798284 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0) 2026-01-01 00:41:32.798318 | orchestrator | 2026-01-01 00:41:32.798329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.798340 | orchestrator | Thursday 01 January 2026 00:41:27 +0000 (0:00:00.431) 0:00:19.478 ****** 2026-01-01 00:41:32.798351 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1) 2026-01-01 00:41:32.798361 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1) 2026-01-01 00:41:32.798372 | orchestrator | 2026-01-01 00:41:32.798383 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.798418 | orchestrator | Thursday 01 January 2026 00:41:28 +0000 (0:00:00.471) 0:00:19.949 ****** 2026-01-01 00:41:32.798429 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e) 2026-01-01 00:41:32.798440 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e) 2026-01-01 00:41:32.798456 | orchestrator | 2026-01-01 00:41:32.798475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:41:32.798492 | orchestrator | Thursday 01 January 2026 00:41:28 +0000 (0:00:00.447) 0:00:20.397 ****** 2026-01-01 00:41:32.798508 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-01 00:41:32.798519 | orchestrator | 2026-01-01 00:41:32.798529 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:32.798540 | orchestrator | Thursday 01 January 2026 00:41:29 +0000 (0:00:00.340) 0:00:20.738 ****** 2026-01-01 00:41:32.798566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-01 00:41:32.798578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-01 00:41:32.798587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-01 00:41:32.798597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-01 
00:41:32.798606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-01 00:41:32.798615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-01 00:41:32.798625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-01 00:41:32.798634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-01 00:41:32.798643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-01 00:41:32.798653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-01 00:41:32.798662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-01 00:41:32.798671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-01 00:41:32.798681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-01 00:41:32.798727 | orchestrator | 2026-01-01 00:41:32.798738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:32.798747 | orchestrator | Thursday 01 January 2026 00:41:29 +0000 (0:00:00.401) 0:00:21.139 ****** 2026-01-01 00:41:32.798757 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.798766 | orchestrator | 2026-01-01 00:41:32.798775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:32.798793 | orchestrator | Thursday 01 January 2026 00:41:30 +0000 (0:00:00.754) 0:00:21.894 ****** 2026-01-01 00:41:32.798803 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.798812 | orchestrator | 2026-01-01 00:41:32.798822 | orchestrator | TASK [Add known partitions to the list 
of available block devices] ************* 2026-01-01 00:41:32.798831 | orchestrator | Thursday 01 January 2026 00:41:30 +0000 (0:00:00.208) 0:00:22.103 ****** 2026-01-01 00:41:32.798840 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.798850 | orchestrator | 2026-01-01 00:41:32.798860 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:32.798869 | orchestrator | Thursday 01 January 2026 00:41:30 +0000 (0:00:00.199) 0:00:22.303 ****** 2026-01-01 00:41:32.798878 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.798888 | orchestrator | 2026-01-01 00:41:32.798897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:32.798907 | orchestrator | Thursday 01 January 2026 00:41:30 +0000 (0:00:00.190) 0:00:22.493 ****** 2026-01-01 00:41:32.798916 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.798926 | orchestrator | 2026-01-01 00:41:32.798935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:32.798944 | orchestrator | Thursday 01 January 2026 00:41:31 +0000 (0:00:00.187) 0:00:22.681 ****** 2026-01-01 00:41:32.798954 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.798963 | orchestrator | 2026-01-01 00:41:32.798972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:32.798982 | orchestrator | Thursday 01 January 2026 00:41:31 +0000 (0:00:00.203) 0:00:22.884 ****** 2026-01-01 00:41:32.798991 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:32.799000 | orchestrator | 2026-01-01 00:41:32.799010 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:32.799019 | orchestrator | Thursday 01 January 2026 00:41:31 +0000 (0:00:00.207) 0:00:23.091 ****** 2026-01-01 00:41:32.799035 | orchestrator | skipping: 
[testbed-node-4] 2026-01-01 00:41:32.799044 | orchestrator | 2026-01-01 00:41:32.799054 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:32.799063 | orchestrator | Thursday 01 January 2026 00:41:31 +0000 (0:00:00.207) 0:00:23.298 ****** 2026-01-01 00:41:32.799099 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-01 00:41:32.799110 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-01 00:41:32.799120 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-01 00:41:32.799129 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-01 00:41:32.799145 | orchestrator | 2026-01-01 00:41:32.799161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:32.799176 | orchestrator | Thursday 01 January 2026 00:41:32 +0000 (0:00:00.882) 0:00:24.181 ****** 2026-01-01 00:41:32.799192 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.064024 | orchestrator | 2026-01-01 00:41:39.064140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:39.064157 | orchestrator | Thursday 01 January 2026 00:41:32 +0000 (0:00:00.206) 0:00:24.388 ****** 2026-01-01 00:41:39.064170 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.064182 | orchestrator | 2026-01-01 00:41:39.064193 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:39.064204 | orchestrator | Thursday 01 January 2026 00:41:33 +0000 (0:00:00.212) 0:00:24.600 ****** 2026-01-01 00:41:39.064216 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.064226 | orchestrator | 2026-01-01 00:41:39.064237 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:41:39.064248 | orchestrator | Thursday 01 January 2026 00:41:33 +0000 (0:00:00.236) 0:00:24.836 ****** 2026-01-01 
00:41:39.064259 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.064270 | orchestrator | 2026-01-01 00:41:39.064281 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-01 00:41:39.064292 | orchestrator | Thursday 01 January 2026 00:41:33 +0000 (0:00:00.561) 0:00:25.398 ****** 2026-01-01 00:41:39.064303 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-01 00:41:39.064314 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-01 00:41:39.064324 | orchestrator | 2026-01-01 00:41:39.064335 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-01 00:41:39.064346 | orchestrator | Thursday 01 January 2026 00:41:33 +0000 (0:00:00.174) 0:00:25.573 ****** 2026-01-01 00:41:39.064356 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.064368 | orchestrator | 2026-01-01 00:41:39.064379 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-01 00:41:39.064390 | orchestrator | Thursday 01 January 2026 00:41:34 +0000 (0:00:00.107) 0:00:25.680 ****** 2026-01-01 00:41:39.064401 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.064411 | orchestrator | 2026-01-01 00:41:39.064422 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-01 00:41:39.064433 | orchestrator | Thursday 01 January 2026 00:41:34 +0000 (0:00:00.114) 0:00:25.795 ****** 2026-01-01 00:41:39.064444 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.064454 | orchestrator | 2026-01-01 00:41:39.064465 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-01 00:41:39.064476 | orchestrator | Thursday 01 January 2026 00:41:34 +0000 (0:00:00.112) 0:00:25.908 ****** 2026-01-01 00:41:39.064487 | orchestrator | ok: [testbed-node-4] 2026-01-01 
00:41:39.064498 | orchestrator | 2026-01-01 00:41:39.064509 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-01 00:41:39.064520 | orchestrator | Thursday 01 January 2026 00:41:34 +0000 (0:00:00.131) 0:00:26.040 ****** 2026-01-01 00:41:39.064532 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4f4651f5-78d1-505d-b741-249c77d228e7'}}) 2026-01-01 00:41:39.064543 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5dc050d-fe50-5167-b35b-32fd51d3d555'}}) 2026-01-01 00:41:39.064580 | orchestrator | 2026-01-01 00:41:39.064594 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-01 00:41:39.064607 | orchestrator | Thursday 01 January 2026 00:41:34 +0000 (0:00:00.172) 0:00:26.213 ****** 2026-01-01 00:41:39.064621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4f4651f5-78d1-505d-b741-249c77d228e7'}})  2026-01-01 00:41:39.064652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5dc050d-fe50-5167-b35b-32fd51d3d555'}})  2026-01-01 00:41:39.064667 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.064679 | orchestrator | 2026-01-01 00:41:39.064714 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-01 00:41:39.064727 | orchestrator | Thursday 01 January 2026 00:41:34 +0000 (0:00:00.124) 0:00:26.337 ****** 2026-01-01 00:41:39.064740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4f4651f5-78d1-505d-b741-249c77d228e7'}})  2026-01-01 00:41:39.064753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5dc050d-fe50-5167-b35b-32fd51d3d555'}})  2026-01-01 00:41:39.064766 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.064779 | 
orchestrator | 2026-01-01 00:41:39.064792 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-01 00:41:39.064805 | orchestrator | Thursday 01 January 2026 00:41:34 +0000 (0:00:00.139) 0:00:26.477 ****** 2026-01-01 00:41:39.064818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4f4651f5-78d1-505d-b741-249c77d228e7'}})  2026-01-01 00:41:39.064831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5dc050d-fe50-5167-b35b-32fd51d3d555'}})  2026-01-01 00:41:39.064844 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.064856 | orchestrator | 2026-01-01 00:41:39.064868 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-01 00:41:39.064881 | orchestrator | Thursday 01 January 2026 00:41:35 +0000 (0:00:00.154) 0:00:26.632 ****** 2026-01-01 00:41:39.064893 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:41:39.064907 | orchestrator | 2026-01-01 00:41:39.064919 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-01 00:41:39.064930 | orchestrator | Thursday 01 January 2026 00:41:35 +0000 (0:00:00.144) 0:00:26.776 ****** 2026-01-01 00:41:39.064941 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:41:39.064951 | orchestrator | 2026-01-01 00:41:39.064962 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-01 00:41:39.064973 | orchestrator | Thursday 01 January 2026 00:41:35 +0000 (0:00:00.137) 0:00:26.914 ****** 2026-01-01 00:41:39.065002 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.065013 | orchestrator | 2026-01-01 00:41:39.065024 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-01 00:41:39.065035 | orchestrator | Thursday 01 January 2026 00:41:35 +0000 (0:00:00.245) 0:00:27.160 
****** 2026-01-01 00:41:39.065046 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.065057 | orchestrator | 2026-01-01 00:41:39.065067 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-01 00:41:39.065078 | orchestrator | Thursday 01 January 2026 00:41:35 +0000 (0:00:00.133) 0:00:27.293 ****** 2026-01-01 00:41:39.065089 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.065100 | orchestrator | 2026-01-01 00:41:39.065111 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-01 00:41:39.065121 | orchestrator | Thursday 01 January 2026 00:41:35 +0000 (0:00:00.115) 0:00:27.409 ****** 2026-01-01 00:41:39.065132 | orchestrator | ok: [testbed-node-4] => { 2026-01-01 00:41:39.065143 | orchestrator |  "ceph_osd_devices": { 2026-01-01 00:41:39.065153 | orchestrator |  "sdb": { 2026-01-01 00:41:39.065165 | orchestrator |  "osd_lvm_uuid": "4f4651f5-78d1-505d-b741-249c77d228e7" 2026-01-01 00:41:39.065185 | orchestrator |  }, 2026-01-01 00:41:39.065196 | orchestrator |  "sdc": { 2026-01-01 00:41:39.065206 | orchestrator |  "osd_lvm_uuid": "e5dc050d-fe50-5167-b35b-32fd51d3d555" 2026-01-01 00:41:39.065217 | orchestrator |  } 2026-01-01 00:41:39.065228 | orchestrator |  } 2026-01-01 00:41:39.065239 | orchestrator | } 2026-01-01 00:41:39.065250 | orchestrator | 2026-01-01 00:41:39.065261 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-01 00:41:39.065271 | orchestrator | Thursday 01 January 2026 00:41:35 +0000 (0:00:00.121) 0:00:27.530 ****** 2026-01-01 00:41:39.065282 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.065293 | orchestrator | 2026-01-01 00:41:39.065303 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-01 00:41:39.065314 | orchestrator | Thursday 01 January 2026 00:41:36 +0000 (0:00:00.125) 0:00:27.656 ****** 
2026-01-01 00:41:39.065325 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.065335 | orchestrator | 2026-01-01 00:41:39.065346 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-01 00:41:39.065357 | orchestrator | Thursday 01 January 2026 00:41:36 +0000 (0:00:00.115) 0:00:27.771 ****** 2026-01-01 00:41:39.065367 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:41:39.065378 | orchestrator | 2026-01-01 00:41:39.065388 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-01 00:41:39.065399 | orchestrator | Thursday 01 January 2026 00:41:36 +0000 (0:00:00.117) 0:00:27.889 ****** 2026-01-01 00:41:39.065410 | orchestrator | changed: [testbed-node-4] => { 2026-01-01 00:41:39.065421 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-01 00:41:39.065432 | orchestrator |  "ceph_osd_devices": { 2026-01-01 00:41:39.065443 | orchestrator |  "sdb": { 2026-01-01 00:41:39.065454 | orchestrator |  "osd_lvm_uuid": "4f4651f5-78d1-505d-b741-249c77d228e7" 2026-01-01 00:41:39.065465 | orchestrator |  }, 2026-01-01 00:41:39.065475 | orchestrator |  "sdc": { 2026-01-01 00:41:39.065486 | orchestrator |  "osd_lvm_uuid": "e5dc050d-fe50-5167-b35b-32fd51d3d555" 2026-01-01 00:41:39.065497 | orchestrator |  } 2026-01-01 00:41:39.065508 | orchestrator |  }, 2026-01-01 00:41:39.065518 | orchestrator |  "lvm_volumes": [ 2026-01-01 00:41:39.065529 | orchestrator |  { 2026-01-01 00:41:39.065540 | orchestrator |  "data": "osd-block-4f4651f5-78d1-505d-b741-249c77d228e7", 2026-01-01 00:41:39.065550 | orchestrator |  "data_vg": "ceph-4f4651f5-78d1-505d-b741-249c77d228e7" 2026-01-01 00:41:39.065561 | orchestrator |  }, 2026-01-01 00:41:39.065572 | orchestrator |  { 2026-01-01 00:41:39.065583 | orchestrator |  "data": "osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555", 2026-01-01 00:41:39.065593 | orchestrator |  "data_vg": "ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555" 
2026-01-01 00:41:39.065604 | orchestrator |             }
2026-01-01 00:41:39.065615 | orchestrator |         ]
2026-01-01 00:41:39.065625 | orchestrator |     }
2026-01-01 00:41:39.065636 | orchestrator | }
2026-01-01 00:41:39.065647 | orchestrator | 
2026-01-01 00:41:39.065657 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-01 00:41:39.065668 | orchestrator | Thursday 01 January 2026 00:41:36 +0000 (0:00:00.199) 0:00:28.088 ******
2026-01-01 00:41:39.065679 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-01 00:41:39.065731 | orchestrator | 
2026-01-01 00:41:39.065743 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-01 00:41:39.065754 | orchestrator | 
2026-01-01 00:41:39.065765 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-01 00:41:39.065775 | orchestrator | Thursday 01 January 2026 00:41:37 +0000 (0:00:01.054) 0:00:29.143 ******
2026-01-01 00:41:39.065786 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-01 00:41:39.065797 | orchestrator | 
2026-01-01 00:41:39.065808 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-01 00:41:39.065833 | orchestrator | Thursday 01 January 2026 00:41:38 +0000 (0:00:00.776) 0:00:29.919 ******
2026-01-01 00:41:39.065844 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:41:39.065855 | orchestrator | 
2026-01-01 00:41:39.065866 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:39.065877 | orchestrator | Thursday 01 January 2026 00:41:38 +0000 (0:00:00.311) 0:00:30.231 ******
2026-01-01 00:41:39.065887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-01 00:41:39.065898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-01 00:41:39.065909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-01 00:41:39.065919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-01 00:41:39.065930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-01 00:41:39.065949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-01 00:41:47.433341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-01 00:41:47.433454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-01 00:41:47.433470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-01 00:41:47.433483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-01 00:41:47.433494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-01 00:41:47.433505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-01 00:41:47.433516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-01 00:41:47.433527 | orchestrator | 
2026-01-01 00:41:47.433540 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.433551 | orchestrator | Thursday 01 January 2026 00:41:39 +0000 (0:00:00.415) 0:00:30.647 ******
2026-01-01 00:41:47.433562 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.433574 | orchestrator | 
2026-01-01 00:41:47.433585 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.433596 | orchestrator | Thursday 01 January 2026 00:41:39 +0000 (0:00:00.225) 0:00:30.872 ******
2026-01-01 00:41:47.433607 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.433618 | orchestrator | 
2026-01-01 00:41:47.433629 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.433640 | orchestrator | Thursday 01 January 2026 00:41:39 +0000 (0:00:00.238) 0:00:31.110 ******
2026-01-01 00:41:47.433650 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.433661 | orchestrator | 
2026-01-01 00:41:47.433672 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.433683 | orchestrator | Thursday 01 January 2026 00:41:39 +0000 (0:00:00.218) 0:00:31.328 ******
2026-01-01 00:41:47.433789 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.433805 | orchestrator | 
2026-01-01 00:41:47.433816 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.433827 | orchestrator | Thursday 01 January 2026 00:41:39 +0000 (0:00:00.231) 0:00:31.560 ******
2026-01-01 00:41:47.433838 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.433849 | orchestrator | 
2026-01-01 00:41:47.433863 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.433876 | orchestrator | Thursday 01 January 2026 00:41:40 +0000 (0:00:00.221) 0:00:31.781 ******
2026-01-01 00:41:47.433888 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.433901 | orchestrator | 
2026-01-01 00:41:47.433914 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.433950 | orchestrator | Thursday 01 January 2026 00:41:40 +0000 (0:00:00.222) 0:00:32.004 ******
2026-01-01 00:41:47.433964 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.433977 | orchestrator | 
2026-01-01 00:41:47.433992 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.434011 | orchestrator | Thursday 01 January 2026 00:41:40 +0000 (0:00:00.248) 0:00:32.252 ******
2026-01-01 00:41:47.434113 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.434134 | orchestrator | 
2026-01-01 00:41:47.434152 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.434170 | orchestrator | Thursday 01 January 2026 00:41:40 +0000 (0:00:00.217) 0:00:32.470 ******
2026-01-01 00:41:47.434186 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44)
2026-01-01 00:41:47.434206 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44)
2026-01-01 00:41:47.434223 | orchestrator | 
2026-01-01 00:41:47.434241 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.434259 | orchestrator | Thursday 01 January 2026 00:41:41 +0000 (0:00:01.090) 0:00:33.560 ******
2026-01-01 00:41:47.434276 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d)
2026-01-01 00:41:47.434294 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d)
2026-01-01 00:41:47.434315 | orchestrator | 
2026-01-01 00:41:47.434335 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.434354 | orchestrator | Thursday 01 January 2026 00:41:42 +0000 (0:00:00.612) 0:00:34.173 ******
2026-01-01 00:41:47.434370 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0)
2026-01-01 00:41:47.434381 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0)
2026-01-01 00:41:47.434392 | orchestrator | 
2026-01-01 00:41:47.434403 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.434414 | orchestrator | Thursday 01 January 2026 00:41:43 +0000 (0:00:00.459) 0:00:34.633 ******
2026-01-01 00:41:47.434424 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb)
2026-01-01 00:41:47.434435 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb)
2026-01-01 00:41:47.434446 | orchestrator | 
2026-01-01 00:41:47.434456 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:41:47.434467 | orchestrator | Thursday 01 January 2026 00:41:43 +0000 (0:00:00.413) 0:00:35.046 ******
2026-01-01 00:41:47.434477 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-01 00:41:47.434488 | orchestrator | 
2026-01-01 00:41:47.434499 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.434530 | orchestrator | Thursday 01 January 2026 00:41:43 +0000 (0:00:00.305) 0:00:35.352 ******
2026-01-01 00:41:47.434542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-01 00:41:47.434552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-01 00:41:47.434563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-01 00:41:47.434574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-01 00:41:47.434584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-01 00:41:47.434611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-01 00:41:47.434623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-01 00:41:47.434634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-01 00:41:47.434657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-01 00:41:47.434668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-01 00:41:47.434678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-01 00:41:47.434689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-01 00:41:47.434723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-01 00:41:47.434734 | orchestrator | 
2026-01-01 00:41:47.434745 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.434755 | orchestrator | Thursday 01 January 2026 00:41:44 +0000 (0:00:00.367) 0:00:35.719 ******
2026-01-01 00:41:47.434766 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.434777 | orchestrator | 
2026-01-01 00:41:47.434788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.434798 | orchestrator | Thursday 01 January 2026 00:41:44 +0000 (0:00:00.314) 0:00:36.034 ******
2026-01-01 00:41:47.434809 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.434820 | orchestrator | 
2026-01-01 00:41:47.434830 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.434847 | orchestrator | Thursday 01 January 2026 00:41:44 +0000 (0:00:00.263) 0:00:36.297 ******
2026-01-01 00:41:47.434858 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.434869 | orchestrator | 
2026-01-01 00:41:47.434880 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.434891 | orchestrator | Thursday 01 January 2026 00:41:44 +0000 (0:00:00.177) 0:00:36.475 ******
2026-01-01 00:41:47.434902 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.434913 | orchestrator | 
2026-01-01 00:41:47.434923 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.434934 | orchestrator | Thursday 01 January 2026 00:41:45 +0000 (0:00:00.169) 0:00:36.645 ******
2026-01-01 00:41:47.434945 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.434955 | orchestrator | 
2026-01-01 00:41:47.434966 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.434977 | orchestrator | Thursday 01 January 2026 00:41:45 +0000 (0:00:00.162) 0:00:36.807 ******
2026-01-01 00:41:47.434988 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.434998 | orchestrator | 
2026-01-01 00:41:47.435009 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.435020 | orchestrator | Thursday 01 January 2026 00:41:45 +0000 (0:00:00.516) 0:00:37.324 ******
2026-01-01 00:41:47.435031 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.435042 | orchestrator | 
2026-01-01 00:41:47.435052 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.435063 | orchestrator | Thursday 01 January 2026 00:41:45 +0000 (0:00:00.193) 0:00:37.518 ******
2026-01-01 00:41:47.435074 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.435084 | orchestrator | 
2026-01-01 00:41:47.435095 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.435106 | orchestrator | Thursday 01 January 2026 00:41:46 +0000 (0:00:00.169) 0:00:37.687 ******
2026-01-01 00:41:47.435116 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-01 00:41:47.435127 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-01 00:41:47.435138 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-01 00:41:47.435149 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-01 00:41:47.435160 | orchestrator | 
2026-01-01 00:41:47.435171 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.435181 | orchestrator | Thursday 01 January 2026 00:41:46 +0000 (0:00:00.645) 0:00:38.333 ******
2026-01-01 00:41:47.435192 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.435209 | orchestrator | 
2026-01-01 00:41:47.435220 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.435231 | orchestrator | Thursday 01 January 2026 00:41:46 +0000 (0:00:00.196) 0:00:38.529 ******
2026-01-01 00:41:47.435242 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.435253 | orchestrator | 
2026-01-01 00:41:47.435263 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.435274 | orchestrator | Thursday 01 January 2026 00:41:47 +0000 (0:00:00.185) 0:00:38.715 ******
2026-01-01 00:41:47.435285 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.435295 | orchestrator | 
2026-01-01 00:41:47.435306 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:41:47.435317 | orchestrator | Thursday 01 January 2026 00:41:47 +0000 (0:00:00.161) 0:00:38.876 ******
2026-01-01 00:41:47.435328 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:47.435338 | orchestrator | 
2026-01-01 00:41:47.435356 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-01 00:41:50.738641 | orchestrator | Thursday 01 January 2026 00:41:47 +0000 (0:00:00.144) 0:00:39.021 ******
2026-01-01 00:41:50.738850 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-01 00:41:50.738883 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-01 00:41:50.738905 | orchestrator | 
2026-01-01 00:41:50.738920 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-01 00:41:50.738932 | orchestrator | Thursday 01 January 2026 00:41:47 +0000 (0:00:00.123) 0:00:39.145 ******
2026-01-01 00:41:50.738943 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.738955 | orchestrator | 
2026-01-01 00:41:50.738966 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-01 00:41:50.738977 | orchestrator | Thursday 01 January 2026 00:41:47 +0000 (0:00:00.089) 0:00:39.234 ******
2026-01-01 00:41:50.738988 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.738999 | orchestrator | 
2026-01-01 00:41:50.739009 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-01 00:41:50.739020 | orchestrator | Thursday 01 January 2026 00:41:47 +0000 (0:00:00.092) 0:00:39.326 ******
2026-01-01 00:41:50.739031 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.739041 | orchestrator | 
2026-01-01 00:41:50.739052 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-01 00:41:50.739063 | orchestrator | Thursday 01 January 2026 00:41:47 +0000 (0:00:00.233) 0:00:39.560 ******
2026-01-01 00:41:50.739074 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:41:50.739085 | orchestrator | 
2026-01-01 00:41:50.739097 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-01 00:41:50.739107 | orchestrator | Thursday 01 January 2026 00:41:48 +0000 (0:00:00.122) 0:00:39.683 ******
2026-01-01 00:41:50.739119 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '21a5f53a-dc04-53e0-afe9-de267ba79db4'}})
2026-01-01 00:41:50.739133 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b87804f1-5161-5843-851c-861f025ab6ce'}})
2026-01-01 00:41:50.739152 | orchestrator | 
2026-01-01 00:41:50.739170 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-01 00:41:50.739189 | orchestrator | Thursday 01 January 2026 00:41:48 +0000 (0:00:00.125) 0:00:39.809 ******
2026-01-01 00:41:50.739211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '21a5f53a-dc04-53e0-afe9-de267ba79db4'}})
2026-01-01 00:41:50.739233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b87804f1-5161-5843-851c-861f025ab6ce'}})
2026-01-01 00:41:50.739251 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.739265 | orchestrator | 
2026-01-01 00:41:50.739279 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-01 00:41:50.739292 | orchestrator | Thursday 01 January 2026 00:41:48 +0000 (0:00:00.123) 0:00:39.932 ******
2026-01-01 00:41:50.739331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '21a5f53a-dc04-53e0-afe9-de267ba79db4'}})
2026-01-01 00:41:50.739343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b87804f1-5161-5843-851c-861f025ab6ce'}})
2026-01-01 00:41:50.739353 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.739364 | orchestrator | 
2026-01-01 00:41:50.739375 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-01 00:41:50.739386 | orchestrator | Thursday 01 January 2026 00:41:48 +0000 (0:00:00.157) 0:00:40.090 ******
2026-01-01 00:41:50.739415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '21a5f53a-dc04-53e0-afe9-de267ba79db4'}})
2026-01-01 00:41:50.739426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b87804f1-5161-5843-851c-861f025ab6ce'}})
2026-01-01 00:41:50.739437 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.739448 | orchestrator | 
2026-01-01 00:41:50.739459 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-01 00:41:50.739470 | orchestrator | Thursday 01 January 2026 00:41:48 +0000 (0:00:00.128) 0:00:40.218 ******
2026-01-01 00:41:50.739480 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:41:50.739491 | orchestrator | 
2026-01-01 00:41:50.739502 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-01 00:41:50.739512 | orchestrator | Thursday 01 January 2026 00:41:48 +0000 (0:00:00.135) 0:00:40.354 ******
2026-01-01 00:41:50.739523 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:41:50.739534 | orchestrator | 
2026-01-01 00:41:50.739544 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-01 00:41:50.739555 | orchestrator | Thursday 01 January 2026 00:41:48 +0000 (0:00:00.141) 0:00:40.495 ******
2026-01-01 00:41:50.739566 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.739577 | orchestrator | 
2026-01-01 00:41:50.739587 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-01 00:41:50.739598 | orchestrator | Thursday 01 January 2026 00:41:49 +0000 (0:00:00.103) 0:00:40.599 ******
2026-01-01 00:41:50.739609 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.739619 | orchestrator | 
2026-01-01 00:41:50.739630 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-01 00:41:50.739641 | orchestrator | Thursday 01 January 2026 00:41:49 +0000 (0:00:00.100) 0:00:40.699 ******
2026-01-01 00:41:50.739651 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.739662 | orchestrator | 
2026-01-01 00:41:50.739673 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-01 00:41:50.739684 | orchestrator | Thursday 01 January 2026 00:41:49 +0000 (0:00:00.102) 0:00:40.802 ******
2026-01-01 00:41:50.739718 | orchestrator | ok: [testbed-node-5] => {
2026-01-01 00:41:50.739730 | orchestrator |     "ceph_osd_devices": {
2026-01-01 00:41:50.739740 | orchestrator |         "sdb": {
2026-01-01 00:41:50.739771 | orchestrator |             "osd_lvm_uuid": "21a5f53a-dc04-53e0-afe9-de267ba79db4"
2026-01-01 00:41:50.739782 | orchestrator |         },
2026-01-01 00:41:50.739793 | orchestrator |         "sdc": {
2026-01-01 00:41:50.739804 | orchestrator |             "osd_lvm_uuid": "b87804f1-5161-5843-851c-861f025ab6ce"
2026-01-01 00:41:50.739815 | orchestrator |         }
2026-01-01 00:41:50.739826 | orchestrator |     }
2026-01-01 00:41:50.739837 | orchestrator | }
2026-01-01 00:41:50.739848 | orchestrator | 
2026-01-01 00:41:50.739859 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-01 00:41:50.739870 | orchestrator | Thursday 01 January 2026 00:41:49 +0000 (0:00:00.112) 0:00:40.914 ******
2026-01-01 00:41:50.739880 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.739891 | orchestrator | 
2026-01-01 00:41:50.739902 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-01 00:41:50.739913 | orchestrator | Thursday 01 January 2026 00:41:49 +0000 (0:00:00.245) 0:00:41.160 ******
2026-01-01 00:41:50.739932 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.739943 | orchestrator | 
2026-01-01 00:41:50.739954 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-01 00:41:50.739964 | orchestrator | Thursday 01 January 2026 00:41:49 +0000 (0:00:00.099) 0:00:41.259 ******
2026-01-01 00:41:50.739975 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:41:50.739986 | orchestrator | 
2026-01-01 00:41:50.739997 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-01 00:41:50.740007 | orchestrator | Thursday 01 January 2026 00:41:49 +0000 (0:00:00.096) 0:00:41.356 ******
2026-01-01 00:41:50.740018 | orchestrator | changed: [testbed-node-5] => {
2026-01-01 00:41:50.740029 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-01 00:41:50.740040 | orchestrator |         "ceph_osd_devices": {
2026-01-01 00:41:50.740059 | orchestrator |             "sdb": {
2026-01-01 00:41:50.740076 | orchestrator |                 "osd_lvm_uuid": "21a5f53a-dc04-53e0-afe9-de267ba79db4"
2026-01-01 00:41:50.740094 | orchestrator |             },
2026-01-01 00:41:50.740112 | orchestrator |             "sdc": {
2026-01-01 00:41:50.740130 | orchestrator |                 "osd_lvm_uuid": "b87804f1-5161-5843-851c-861f025ab6ce"
2026-01-01 00:41:50.740149 | orchestrator |             }
2026-01-01 00:41:50.740168 | orchestrator |         },
2026-01-01 00:41:50.740188 | orchestrator |         "lvm_volumes": [
2026-01-01 00:41:50.740205 | orchestrator |             {
2026-01-01 00:41:50.740223 | orchestrator |                 "data": "osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4",
2026-01-01 00:41:50.740241 | orchestrator |                 "data_vg": "ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4"
2026-01-01 00:41:50.740262 | orchestrator |             },
2026-01-01 00:41:50.740281 | orchestrator |             {
2026-01-01 00:41:50.740299 | orchestrator |                 "data": "osd-block-b87804f1-5161-5843-851c-861f025ab6ce",
2026-01-01 00:41:50.740317 | orchestrator |                 "data_vg": "ceph-b87804f1-5161-5843-851c-861f025ab6ce"
2026-01-01 00:41:50.740336 | orchestrator |             }
2026-01-01 00:41:50.740360 | orchestrator |         ]
2026-01-01 00:41:50.740380 | orchestrator |     }
2026-01-01 00:41:50.740399 | orchestrator | }
2026-01-01 00:41:50.740417 | orchestrator | 
2026-01-01 00:41:50.740435 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-01 00:41:50.740455 | orchestrator | Thursday 01 January 2026 00:41:49 +0000 (0:00:00.157) 0:00:41.514 ******
2026-01-01 00:41:50.740472 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-01 00:41:50.740490 | orchestrator | 
2026-01-01 00:41:50.740502 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:41:50.740513 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-01-01 00:41:50.740527 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-01-01 00:41:50.740546 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
2026-01-01 00:41:50.740564 | orchestrator | 
2026-01-01 00:41:50.740582 | orchestrator | 
2026-01-01 00:41:50.740600 | orchestrator | 
2026-01-01 00:41:50.740618 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:41:50.740637 | orchestrator | Thursday 01 January 2026 00:41:50 +0000 (0:00:00.806) 0:00:42.320 ******
2026-01-01 00:41:50.740655 | orchestrator | ===============================================================================
2026-01-01 00:41:50.740673 | orchestrator | Write configuration file ------------------------------------------------ 3.76s
2026-01-01 00:41:50.740734 | orchestrator | Add known links to the list of available block devices ------------------ 1.38s
2026-01-01 00:41:50.740757 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.31s
2026-01-01 00:41:50.740775 | orchestrator | Add known partitions to the list of available block devices ------------- 1.16s
2026-01-01 00:41:50.740823 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s
2026-01-01 00:41:50.740842 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s
2026-01-01 00:41:50.740857 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s
2026-01-01 00:41:50.740868 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s
2026-01-01 00:41:50.740879 | orchestrator | Get initial list of available block devices ----------------------------- 0.81s
2026-01-01 00:41:50.740890 | orchestrator | Print configuration data ------------------------------------------------ 0.78s
2026-01-01 00:41:50.740900 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2026-01-01 00:41:50.740911 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.68s
2026-01-01 00:41:50.740922 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2026-01-01 00:41:50.740945 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2026-01-01 00:41:51.043867 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-01-01 00:41:51.043968 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2026-01-01 00:41:51.043982 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-01-01 00:41:51.043994 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s
2026-01-01 00:41:51.044005 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s
2026-01-01 00:41:51.044016 | orchestrator | Print WAL devices ------------------------------------------------------- 0.50s
2026-01-01 00:42:13.454909 | orchestrator | 2026-01-01 00:42:13 | INFO  | Task eec88160-ac3f-4a95-8966-c843e468821f (sync inventory) is running in background. Output coming soon.
2026-01-01 00:42:44.179867 | orchestrator | 2026-01-01 00:42:14 | INFO  | Starting group_vars file reorganization
2026-01-01 00:42:44.179951 | orchestrator | 2026-01-01 00:42:14 | INFO  | Moved 0 file(s) to their respective directories
2026-01-01 00:42:44.179959 | orchestrator | 2026-01-01 00:42:14 | INFO  | Group_vars file reorganization completed
2026-01-01 00:42:44.179966 | orchestrator | 2026-01-01 00:42:18 | INFO  | Starting variable preparation from inventory
2026-01-01 00:42:44.179972 | orchestrator | 2026-01-01 00:42:21 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-01 00:42:44.179978 | orchestrator | 2026-01-01 00:42:21 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-01 00:42:44.179998 | orchestrator | 2026-01-01 00:42:21 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-01 00:42:44.180004 | orchestrator | 2026-01-01 00:42:21 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-01 00:42:44.180010 | orchestrator | 2026-01-01 00:42:21 | INFO  | Variable preparation completed
2026-01-01 00:42:44.180015 | orchestrator | 2026-01-01 00:42:23 | INFO  | Starting inventory overwrite handling
2026-01-01 00:42:44.180024 | orchestrator | 2026-01-01 00:42:23 | INFO  | Handling group overwrites in 99-overwrite
2026-01-01 00:42:44.180029 | orchestrator | 2026-01-01 00:42:23 | INFO  | Removing group frr:children from 60-generic
2026-01-01 00:42:44.180034 | orchestrator | 2026-01-01 00:42:23 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-01 00:42:44.180040 | orchestrator | 2026-01-01 00:42:23 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-01 00:42:44.180045 | orchestrator | 2026-01-01 00:42:23 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-01 00:42:44.180050 | orchestrator | 2026-01-01 00:42:23 | INFO  | Handling group overwrites in 20-roles
2026-01-01 00:42:44.180071 | orchestrator | 2026-01-01 00:42:23 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-01 00:42:44.180077 | orchestrator | 2026-01-01 00:42:23 | INFO  | Removed 5 group(s) in total
2026-01-01 00:42:44.180081 | orchestrator | 2026-01-01 00:42:23 | INFO  | Inventory overwrite handling completed
2026-01-01 00:42:44.180087 | orchestrator | 2026-01-01 00:42:24 | INFO  | Starting merge of inventory files
2026-01-01 00:42:44.180091 | orchestrator | 2026-01-01 00:42:24 | INFO  | Inventory files merged successfully
2026-01-01 00:42:44.180096 | orchestrator | 2026-01-01 00:42:30 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-01 00:42:44.180101 | orchestrator | 2026-01-01 00:42:42 | INFO  | Successfully wrote ClusterShell configuration
2026-01-01 00:42:44.180107 | orchestrator | [master 116dcd0] 2026-01-01-00-42
2026-01-01 00:42:44.180113 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-01 00:42:46.295148 | orchestrator | 2026-01-01 00:42:46 | INFO  | Task 59dc43af-8091-4b37-a4f0-713f8f44e304 (ceph-create-lvm-devices) was prepared for execution.
2026-01-01 00:42:46.295282 | orchestrator | 2026-01-01 00:42:46 | INFO  | It takes a moment until task 59dc43af-8091-4b37-a4f0-713f8f44e304 (ceph-create-lvm-devices) has been started and output is visible here.
2026-01-01 00:42:58.970636 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-01 00:42:58.970866 | orchestrator | 2.16.14
2026-01-01 00:42:58.970898 | orchestrator | 
2026-01-01 00:42:58.970919 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-01 00:42:58.970940 | orchestrator | 
2026-01-01 00:42:58.970958 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-01 00:42:58.970977 | orchestrator | Thursday 01 January 2026 00:42:50 +0000 (0:00:00.233) 0:00:00.233 ******
2026-01-01 00:42:58.970995 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-01 00:42:58.971014 | orchestrator | 
2026-01-01 00:42:58.971033 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-01 00:42:58.971050 | orchestrator | Thursday 01 January 2026 00:42:50 +0000 (0:00:00.279) 0:00:00.513 ******
2026-01-01 00:42:58.971069 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:42:58.971088 | orchestrator | 
2026-01-01 00:42:58.971109 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:42:58.971127 | orchestrator | Thursday 01 January 2026 00:42:51 +0000 (0:00:00.198) 0:00:00.712 ******
2026-01-01 00:42:58.971146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-01 00:42:58.971166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-01 00:42:58.971194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-01 00:42:58.971223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-01 00:42:58.971246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-01 00:42:58.971265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-01 00:42:58.971283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-01 00:42:58.971302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-01 00:42:58.971320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-01 00:42:58.971339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-01 00:42:58.971358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-01 00:42:58.971377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-01 00:42:58.971425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-01 00:42:58.971444 | orchestrator | 
2026-01-01 00:42:58.971463 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:42:58.971482 | orchestrator | Thursday 01 January 2026 00:42:51 +0000 (0:00:00.440) 0:00:01.152 ******
2026-01-01 00:42:58.971500 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:42:58.971518 | orchestrator | 
2026-01-01 00:42:58.971536 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:42:58.971555 | orchestrator | Thursday 01 January 2026 00:42:51 +0000 (0:00:00.173) 0:00:01.326 ******
2026-01-01 00:42:58.971573 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:42:58.971591 | orchestrator | 
2026-01-01 00:42:58.971610 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:42:58.971628 | orchestrator | Thursday 01 January 2026 00:42:51 +0000 (0:00:00.279) 0:00:01.606 ******
2026-01-01 00:42:58.971646 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:42:58.971664 | orchestrator | 
2026-01-01 00:42:58.971682 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:42:58.971731 | orchestrator | Thursday 01 January 2026 00:42:52 +0000 (0:00:00.299) 0:00:01.906 ******
2026-01-01 00:42:58.971751 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:42:58.971769 | orchestrator | 
2026-01-01 00:42:58.971786 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:42:58.971804 | orchestrator | Thursday 01 January 2026 00:42:52 +0000 (0:00:00.336) 0:00:02.243 ******
2026-01-01 00:42:58.971822 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:42:58.971841 | orchestrator | 
2026-01-01 00:42:58.971858 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:42:58.971877 | orchestrator | Thursday 01 January 2026 00:42:52 +0000 (0:00:00.219) 0:00:02.462 ******
2026-01-01 00:42:58.971895 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:42:58.971912 | orchestrator | 
2026-01-01 00:42:58.971931 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:42:58.971948 | orchestrator | Thursday 01 January 2026 00:42:52 +0000 (0:00:00.188) 0:00:02.650 ******
2026-01-01 00:42:58.971965 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:42:58.971983 | orchestrator | 
2026-01-01 00:42:58.972002 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:42:58.972020 | orchestrator | Thursday 01 January 2026 00:42:53 +0000 (0:00:00.218) 0:00:02.868 ******
2026-01-01 00:42:58.972038 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:42:58.972057 | orchestrator | 
2026-01-01 00:42:58.972087 | orchestrator | TASK [Add known links to the list of available block devices]
****************** 2026-01-01 00:42:58.972106 | orchestrator | Thursday 01 January 2026 00:42:53 +0000 (0:00:00.192) 0:00:03.061 ****** 2026-01-01 00:42:58.972124 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91) 2026-01-01 00:42:58.972145 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91) 2026-01-01 00:42:58.972162 | orchestrator | 2026-01-01 00:42:58.972180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:42:58.972223 | orchestrator | Thursday 01 January 2026 00:42:53 +0000 (0:00:00.445) 0:00:03.507 ****** 2026-01-01 00:42:58.972243 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec) 2026-01-01 00:42:58.972262 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec) 2026-01-01 00:42:58.972280 | orchestrator | 2026-01-01 00:42:58.972298 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:42:58.972316 | orchestrator | Thursday 01 January 2026 00:42:54 +0000 (0:00:00.680) 0:00:04.187 ****** 2026-01-01 00:42:58.972334 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486) 2026-01-01 00:42:58.972367 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486) 2026-01-01 00:42:58.972386 | orchestrator | 2026-01-01 00:42:58.972404 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:42:58.972422 | orchestrator | Thursday 01 January 2026 00:42:55 +0000 (0:00:00.835) 0:00:05.023 ****** 2026-01-01 00:42:58.972440 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1) 2026-01-01 00:42:58.972458 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1) 2026-01-01 00:42:58.972476 | orchestrator | 2026-01-01 00:42:58.972488 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:42:58.972498 | orchestrator | Thursday 01 January 2026 00:42:56 +0000 (0:00:01.129) 0:00:06.152 ****** 2026-01-01 00:42:58.972509 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-01 00:42:58.972520 | orchestrator | 2026-01-01 00:42:58.972531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:42:58.972542 | orchestrator | Thursday 01 January 2026 00:42:56 +0000 (0:00:00.417) 0:00:06.569 ****** 2026-01-01 00:42:58.972552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-01 00:42:58.972563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-01 00:42:58.972576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-01 00:42:58.972616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-01 00:42:58.972636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-01 00:42:58.972656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-01 00:42:58.972669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-01 00:42:58.972687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-01 00:42:58.972732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-01 00:42:58.972756 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-01 00:42:58.972780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-01 00:42:58.972806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-01 00:42:58.972823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-01 00:42:58.972839 | orchestrator | 2026-01-01 00:42:58.972857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:42:58.972874 | orchestrator | Thursday 01 January 2026 00:42:57 +0000 (0:00:00.445) 0:00:07.014 ****** 2026-01-01 00:42:58.972891 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:42:58.972908 | orchestrator | 2026-01-01 00:42:58.972926 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:42:58.972944 | orchestrator | Thursday 01 January 2026 00:42:57 +0000 (0:00:00.219) 0:00:07.234 ****** 2026-01-01 00:42:58.972960 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:42:58.972978 | orchestrator | 2026-01-01 00:42:58.972997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:42:58.973013 | orchestrator | Thursday 01 January 2026 00:42:57 +0000 (0:00:00.215) 0:00:07.450 ****** 2026-01-01 00:42:58.973029 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:42:58.973044 | orchestrator | 2026-01-01 00:42:58.973061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:42:58.973077 | orchestrator | Thursday 01 January 2026 00:42:58 +0000 (0:00:00.212) 0:00:07.662 ****** 2026-01-01 00:42:58.973109 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:42:58.973127 | orchestrator | 2026-01-01 00:42:58.973145 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-01 00:42:58.973162 | orchestrator | Thursday 01 January 2026 00:42:58 +0000 (0:00:00.265) 0:00:07.928 ****** 2026-01-01 00:42:58.973180 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:42:58.973198 | orchestrator | 2026-01-01 00:42:58.973216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:42:58.973234 | orchestrator | Thursday 01 January 2026 00:42:58 +0000 (0:00:00.229) 0:00:08.157 ****** 2026-01-01 00:42:58.973252 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:42:58.973270 | orchestrator | 2026-01-01 00:42:58.973288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:42:58.973306 | orchestrator | Thursday 01 January 2026 00:42:58 +0000 (0:00:00.213) 0:00:08.370 ****** 2026-01-01 00:42:58.973324 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:42:58.973343 | orchestrator | 2026-01-01 00:42:58.973377 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:43:08.080905 | orchestrator | Thursday 01 January 2026 00:42:58 +0000 (0:00:00.249) 0:00:08.620 ****** 2026-01-01 00:43:08.081017 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.081035 | orchestrator | 2026-01-01 00:43:08.081049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:43:08.081061 | orchestrator | Thursday 01 January 2026 00:42:59 +0000 (0:00:00.232) 0:00:08.852 ****** 2026-01-01 00:43:08.081072 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-01 00:43:08.081084 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-01 00:43:08.081096 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-01 00:43:08.081107 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-01 00:43:08.081118 | orchestrator | 2026-01-01 
00:43:08.081129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:43:08.081140 | orchestrator | Thursday 01 January 2026 00:43:00 +0000 (0:00:01.207) 0:00:10.060 ****** 2026-01-01 00:43:08.081151 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.081162 | orchestrator | 2026-01-01 00:43:08.081173 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:43:08.081184 | orchestrator | Thursday 01 January 2026 00:43:00 +0000 (0:00:00.267) 0:00:10.327 ****** 2026-01-01 00:43:08.081195 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.081206 | orchestrator | 2026-01-01 00:43:08.081216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:43:08.081228 | orchestrator | Thursday 01 January 2026 00:43:00 +0000 (0:00:00.284) 0:00:10.612 ****** 2026-01-01 00:43:08.081239 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.081249 | orchestrator | 2026-01-01 00:43:08.081261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:43:08.081272 | orchestrator | Thursday 01 January 2026 00:43:01 +0000 (0:00:00.227) 0:00:10.839 ****** 2026-01-01 00:43:08.081285 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.081303 | orchestrator | 2026-01-01 00:43:08.081322 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-01 00:43:08.081340 | orchestrator | Thursday 01 January 2026 00:43:01 +0000 (0:00:00.248) 0:00:11.088 ****** 2026-01-01 00:43:08.081358 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.081376 | orchestrator | 2026-01-01 00:43:08.081394 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-01 00:43:08.081413 | orchestrator | Thursday 01 January 2026 00:43:01 +0000 (0:00:00.185) 
0:00:11.273 ****** 2026-01-01 00:43:08.081435 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '906f607d-f8ab-576d-9485-c345cfde3c80'}}) 2026-01-01 00:43:08.081454 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '27db58f4-0fe4-54a7-94bd-e6fe47c26f99'}}) 2026-01-01 00:43:08.081468 | orchestrator | 2026-01-01 00:43:08.081486 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-01 00:43:08.081539 | orchestrator | Thursday 01 January 2026 00:43:01 +0000 (0:00:00.263) 0:00:11.536 ****** 2026-01-01 00:43:08.081555 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'}) 2026-01-01 00:43:08.081570 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'}) 2026-01-01 00:43:08.081584 | orchestrator | 2026-01-01 00:43:08.081598 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-01 00:43:08.081612 | orchestrator | Thursday 01 January 2026 00:43:04 +0000 (0:00:02.128) 0:00:13.664 ****** 2026-01-01 00:43:08.081626 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})  2026-01-01 00:43:08.081640 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})  2026-01-01 00:43:08.081657 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.081677 | orchestrator | 2026-01-01 00:43:08.081697 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-01 00:43:08.081750 | orchestrator | Thursday 01 January 2026 
00:43:04 +0000 (0:00:00.187) 0:00:13.851 ****** 2026-01-01 00:43:08.081764 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'}) 2026-01-01 00:43:08.081777 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'}) 2026-01-01 00:43:08.081791 | orchestrator | 2026-01-01 00:43:08.081805 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-01 00:43:08.081816 | orchestrator | Thursday 01 January 2026 00:43:05 +0000 (0:00:01.476) 0:00:15.328 ****** 2026-01-01 00:43:08.081827 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})  2026-01-01 00:43:08.081838 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})  2026-01-01 00:43:08.081849 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.081860 | orchestrator | 2026-01-01 00:43:08.081870 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-01 00:43:08.081881 | orchestrator | Thursday 01 January 2026 00:43:05 +0000 (0:00:00.175) 0:00:15.503 ****** 2026-01-01 00:43:08.081911 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.081922 | orchestrator | 2026-01-01 00:43:08.081933 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-01 00:43:08.081944 | orchestrator | Thursday 01 January 2026 00:43:05 +0000 (0:00:00.156) 0:00:15.660 ****** 2026-01-01 00:43:08.081955 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 
'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})  2026-01-01 00:43:08.081966 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})  2026-01-01 00:43:08.081976 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.081987 | orchestrator | 2026-01-01 00:43:08.081997 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-01 00:43:08.082008 | orchestrator | Thursday 01 January 2026 00:43:06 +0000 (0:00:00.413) 0:00:16.074 ****** 2026-01-01 00:43:08.082090 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.082102 | orchestrator | 2026-01-01 00:43:08.082114 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-01 00:43:08.082159 | orchestrator | Thursday 01 January 2026 00:43:06 +0000 (0:00:00.167) 0:00:16.242 ****** 2026-01-01 00:43:08.082185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})  2026-01-01 00:43:08.082196 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})  2026-01-01 00:43:08.082207 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.082218 | orchestrator | 2026-01-01 00:43:08.082229 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-01 00:43:08.082240 | orchestrator | Thursday 01 January 2026 00:43:06 +0000 (0:00:00.186) 0:00:16.428 ****** 2026-01-01 00:43:08.082250 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.082261 | orchestrator | 2026-01-01 00:43:08.082272 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-01 00:43:08.082282 | orchestrator | 
Thursday 01 January 2026 00:43:06 +0000 (0:00:00.180) 0:00:16.609 ****** 2026-01-01 00:43:08.082293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})  2026-01-01 00:43:08.082304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})  2026-01-01 00:43:08.082315 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.082325 | orchestrator | 2026-01-01 00:43:08.082336 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-01 00:43:08.082347 | orchestrator | Thursday 01 January 2026 00:43:07 +0000 (0:00:00.161) 0:00:16.770 ****** 2026-01-01 00:43:08.082358 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:43:08.082369 | orchestrator | 2026-01-01 00:43:08.082380 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-01 00:43:08.082406 | orchestrator | Thursday 01 January 2026 00:43:07 +0000 (0:00:00.148) 0:00:16.919 ****** 2026-01-01 00:43:08.082423 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})  2026-01-01 00:43:08.082434 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})  2026-01-01 00:43:08.082445 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.082455 | orchestrator | 2026-01-01 00:43:08.082466 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-01 00:43:08.082477 | orchestrator | Thursday 01 January 2026 00:43:07 +0000 (0:00:00.235) 0:00:17.154 ****** 2026-01-01 00:43:08.082487 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})  2026-01-01 00:43:08.082498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})  2026-01-01 00:43:08.082509 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.082519 | orchestrator | 2026-01-01 00:43:08.082530 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-01 00:43:08.082541 | orchestrator | Thursday 01 January 2026 00:43:07 +0000 (0:00:00.210) 0:00:17.365 ****** 2026-01-01 00:43:08.082552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})  2026-01-01 00:43:08.082563 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})  2026-01-01 00:43:08.082574 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.082584 | orchestrator | 2026-01-01 00:43:08.082595 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-01 00:43:08.082613 | orchestrator | Thursday 01 January 2026 00:43:07 +0000 (0:00:00.181) 0:00:17.546 ****** 2026-01-01 00:43:08.082625 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:08.082642 | orchestrator | 2026-01-01 00:43:08.082661 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-01 00:43:08.082723 | orchestrator | Thursday 01 January 2026 00:43:08 +0000 (0:00:00.178) 0:00:17.724 ****** 2026-01-01 00:43:15.166554 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:15.166631 | orchestrator | 2026-01-01 00:43:15.166638 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-01-01 00:43:15.166644 | orchestrator | Thursday 01 January 2026 00:43:08 +0000 (0:00:00.146) 0:00:17.870 ****** 2026-01-01 00:43:15.166649 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:15.166653 | orchestrator | 2026-01-01 00:43:15.166657 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-01 00:43:15.166661 | orchestrator | Thursday 01 January 2026 00:43:08 +0000 (0:00:00.159) 0:00:18.030 ****** 2026-01-01 00:43:15.166665 | orchestrator | ok: [testbed-node-3] => { 2026-01-01 00:43:15.166669 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-01 00:43:15.166673 | orchestrator | } 2026-01-01 00:43:15.166678 | orchestrator | 2026-01-01 00:43:15.166682 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-01 00:43:15.166685 | orchestrator | Thursday 01 January 2026 00:43:08 +0000 (0:00:00.416) 0:00:18.446 ****** 2026-01-01 00:43:15.166689 | orchestrator | ok: [testbed-node-3] => { 2026-01-01 00:43:15.166693 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-01 00:43:15.166697 | orchestrator | } 2026-01-01 00:43:15.166734 | orchestrator | 2026-01-01 00:43:15.166739 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-01 00:43:15.166743 | orchestrator | Thursday 01 January 2026 00:43:08 +0000 (0:00:00.162) 0:00:18.609 ****** 2026-01-01 00:43:15.166747 | orchestrator | ok: [testbed-node-3] => { 2026-01-01 00:43:15.166751 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-01 00:43:15.166755 | orchestrator | } 2026-01-01 00:43:15.166759 | orchestrator | 2026-01-01 00:43:15.166763 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-01 00:43:15.166767 | orchestrator | Thursday 01 January 2026 00:43:09 +0000 (0:00:00.179) 0:00:18.788 ****** 2026-01-01 00:43:15.166770 | orchestrator | ok: 
[testbed-node-3] 2026-01-01 00:43:15.166774 | orchestrator | 2026-01-01 00:43:15.166778 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-01 00:43:15.166782 | orchestrator | Thursday 01 January 2026 00:43:09 +0000 (0:00:00.737) 0:00:19.526 ****** 2026-01-01 00:43:15.166786 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:43:15.166790 | orchestrator | 2026-01-01 00:43:15.166794 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-01 00:43:15.166798 | orchestrator | Thursday 01 January 2026 00:43:10 +0000 (0:00:00.529) 0:00:20.055 ****** 2026-01-01 00:43:15.166801 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:43:15.166805 | orchestrator | 2026-01-01 00:43:15.166809 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-01 00:43:15.166813 | orchestrator | Thursday 01 January 2026 00:43:10 +0000 (0:00:00.585) 0:00:20.640 ****** 2026-01-01 00:43:15.166818 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:43:15.166821 | orchestrator | 2026-01-01 00:43:15.166825 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-01 00:43:15.166829 | orchestrator | Thursday 01 January 2026 00:43:11 +0000 (0:00:00.141) 0:00:20.782 ****** 2026-01-01 00:43:15.166833 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:15.166837 | orchestrator | 2026-01-01 00:43:15.166841 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-01 00:43:15.166845 | orchestrator | Thursday 01 January 2026 00:43:11 +0000 (0:00:00.119) 0:00:20.901 ****** 2026-01-01 00:43:15.166849 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:15.166852 | orchestrator | 2026-01-01 00:43:15.166856 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-01 00:43:15.166884 | orchestrator | 
Thursday 01 January 2026 00:43:11 +0000 (0:00:00.111) 0:00:21.012 ****** 2026-01-01 00:43:15.166889 | orchestrator | ok: [testbed-node-3] => { 2026-01-01 00:43:15.166893 | orchestrator |  "vgs_report": { 2026-01-01 00:43:15.166897 | orchestrator |  "vg": [] 2026-01-01 00:43:15.166901 | orchestrator |  } 2026-01-01 00:43:15.166904 | orchestrator | } 2026-01-01 00:43:15.166908 | orchestrator | 2026-01-01 00:43:15.166912 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-01 00:43:15.166916 | orchestrator | Thursday 01 January 2026 00:43:11 +0000 (0:00:00.153) 0:00:21.166 ****** 2026-01-01 00:43:15.166919 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:15.166923 | orchestrator | 2026-01-01 00:43:15.166927 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-01 00:43:15.166931 | orchestrator | Thursday 01 January 2026 00:43:11 +0000 (0:00:00.160) 0:00:21.326 ****** 2026-01-01 00:43:15.166934 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:15.166938 | orchestrator | 2026-01-01 00:43:15.166942 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-01 00:43:15.166946 | orchestrator | Thursday 01 January 2026 00:43:11 +0000 (0:00:00.169) 0:00:21.496 ****** 2026-01-01 00:43:15.166950 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:15.166953 | orchestrator | 2026-01-01 00:43:15.166957 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-01 00:43:15.166961 | orchestrator | Thursday 01 January 2026 00:43:12 +0000 (0:00:00.490) 0:00:21.986 ****** 2026-01-01 00:43:15.166965 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:43:15.166968 | orchestrator | 2026-01-01 00:43:15.166972 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-01 00:43:15.166976 | orchestrator | 
Thursday 01 January 2026 00:43:12 +0000 (0:00:00.172) 0:00:22.159 ******
2026-01-01 00:43:15.166980 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.166986 | orchestrator |
2026-01-01 00:43:15.166992 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-01 00:43:15.166998 | orchestrator | Thursday 01 January 2026 00:43:12 +0000 (0:00:00.149) 0:00:22.309 ******
2026-01-01 00:43:15.167003 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167009 | orchestrator |
2026-01-01 00:43:15.167015 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-01 00:43:15.167021 | orchestrator | Thursday 01 January 2026 00:43:12 +0000 (0:00:00.153) 0:00:22.462 ******
2026-01-01 00:43:15.167027 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167033 | orchestrator |
2026-01-01 00:43:15.167038 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-01 00:43:15.167044 | orchestrator | Thursday 01 January 2026 00:43:12 +0000 (0:00:00.152) 0:00:22.615 ******
2026-01-01 00:43:15.167063 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167070 | orchestrator |
2026-01-01 00:43:15.167076 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-01 00:43:15.167082 | orchestrator | Thursday 01 January 2026 00:43:13 +0000 (0:00:00.153) 0:00:22.768 ******
2026-01-01 00:43:15.167087 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167094 | orchestrator |
2026-01-01 00:43:15.167100 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-01 00:43:15.167107 | orchestrator | Thursday 01 January 2026 00:43:13 +0000 (0:00:00.136) 0:00:22.905 ******
2026-01-01 00:43:15.167111 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167116 | orchestrator |
2026-01-01 00:43:15.167120 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-01 00:43:15.167124 | orchestrator | Thursday 01 January 2026 00:43:13 +0000 (0:00:00.155) 0:00:23.060 ******
2026-01-01 00:43:15.167129 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167133 | orchestrator |
2026-01-01 00:43:15.167138 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-01 00:43:15.167142 | orchestrator | Thursday 01 January 2026 00:43:13 +0000 (0:00:00.135) 0:00:23.195 ******
2026-01-01 00:43:15.167152 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167156 | orchestrator |
2026-01-01 00:43:15.167161 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-01 00:43:15.167165 | orchestrator | Thursday 01 January 2026 00:43:13 +0000 (0:00:00.140) 0:00:23.336 ******
2026-01-01 00:43:15.167169 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167174 | orchestrator |
2026-01-01 00:43:15.167178 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-01 00:43:15.167183 | orchestrator | Thursday 01 January 2026 00:43:13 +0000 (0:00:00.139) 0:00:23.475 ******
2026-01-01 00:43:15.167187 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167191 | orchestrator |
2026-01-01 00:43:15.167196 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-01 00:43:15.167200 | orchestrator | Thursday 01 January 2026 00:43:13 +0000 (0:00:00.143) 0:00:23.619 ******
2026-01-01 00:43:15.167206 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:15.167213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:15.167217 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167221 | orchestrator |
2026-01-01 00:43:15.167226 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-01 00:43:15.167231 | orchestrator | Thursday 01 January 2026 00:43:14 +0000 (0:00:00.378) 0:00:23.997 ******
2026-01-01 00:43:15.167235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:15.167239 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:15.167244 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167249 | orchestrator |
2026-01-01 00:43:15.167253 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-01 00:43:15.167257 | orchestrator | Thursday 01 January 2026 00:43:14 +0000 (0:00:00.159) 0:00:24.157 ******
2026-01-01 00:43:15.167262 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:15.167267 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:15.167271 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167276 | orchestrator |
2026-01-01 00:43:15.167280 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-01 00:43:15.167285 | orchestrator | Thursday 01 January 2026 00:43:14 +0000 (0:00:00.167) 0:00:24.324 ******
2026-01-01 00:43:15.167290 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:15.167295 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:15.167299 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167303 | orchestrator |
2026-01-01 00:43:15.167308 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-01 00:43:15.167312 | orchestrator | Thursday 01 January 2026 00:43:14 +0000 (0:00:00.161) 0:00:24.486 ******
2026-01-01 00:43:15.167316 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:15.167321 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:15.167330 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:15.167334 | orchestrator |
2026-01-01 00:43:15.167339 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-01 00:43:15.167349 | orchestrator | Thursday 01 January 2026 00:43:14 +0000 (0:00:00.169) 0:00:24.655 ******
2026-01-01 00:43:15.167358 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:20.747266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:20.747372 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:20.747386 | orchestrator |
2026-01-01 00:43:20.747397 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-01 00:43:20.747408 | orchestrator | Thursday 01 January 2026 00:43:15 +0000 (0:00:00.168) 0:00:24.824 ******
2026-01-01 00:43:20.747418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:20.747427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:20.747436 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:20.747445 | orchestrator |
2026-01-01 00:43:20.747454 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-01 00:43:20.747463 | orchestrator | Thursday 01 January 2026 00:43:15 +0000 (0:00:00.173) 0:00:24.998 ******
2026-01-01 00:43:20.747472 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:20.747481 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:20.747490 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:20.747499 | orchestrator |
2026-01-01 00:43:20.747507 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-01 00:43:20.747516 | orchestrator | Thursday 01 January 2026 00:43:15 +0000 (0:00:00.172) 0:00:25.170 ******
2026-01-01 00:43:20.747525 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:43:20.747535 | orchestrator |
2026-01-01 00:43:20.747543 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-01 00:43:20.747552 | orchestrator | Thursday 01 January 2026 00:43:16 +0000 (0:00:00.513) 0:00:25.683 ******
2026-01-01 00:43:20.747561 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:43:20.747569 | orchestrator |
2026-01-01 00:43:20.747578 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-01 00:43:20.747587 | orchestrator | Thursday 01 January 2026 00:43:16 +0000 (0:00:00.551) 0:00:26.235 ******
2026-01-01 00:43:20.747595 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:43:20.747604 | orchestrator |
2026-01-01 00:43:20.747613 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-01 00:43:20.747621 | orchestrator | Thursday 01 January 2026 00:43:16 +0000 (0:00:00.177) 0:00:26.413 ******
2026-01-01 00:43:20.747630 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'vg_name': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:20.747655 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'vg_name': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:20.747664 | orchestrator |
2026-01-01 00:43:20.747673 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-01 00:43:20.747681 | orchestrator | Thursday 01 January 2026 00:43:16 +0000 (0:00:00.209) 0:00:26.622 ******
2026-01-01 00:43:20.747748 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:20.747759 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:20.747768 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:20.747776 | orchestrator |
2026-01-01 00:43:20.747785 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-01 00:43:20.747794 | orchestrator | Thursday 01 January 2026 00:43:17 +0000 (0:00:00.386) 0:00:27.009 ******
2026-01-01 00:43:20.747804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:20.747814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:20.747824 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:20.747836 | orchestrator |
2026-01-01 00:43:20.747845 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-01 00:43:20.747855 | orchestrator | Thursday 01 January 2026 00:43:17 +0000 (0:00:00.166) 0:00:27.176 ******
2026-01-01 00:43:20.747865 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'})
2026-01-01 00:43:20.747875 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'})
2026-01-01 00:43:20.747885 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:43:20.747895 | orchestrator |
2026-01-01 00:43:20.747905 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-01 00:43:20.747916 | orchestrator | Thursday 01 January 2026 00:43:17 +0000 (0:00:00.163) 0:00:27.339 ******
2026-01-01 00:43:20.747941 | orchestrator | ok: [testbed-node-3] => {
2026-01-01 00:43:20.747953 | orchestrator |     "lvm_report": {
2026-01-01 00:43:20.747963 | orchestrator |         "lv": [
2026-01-01 00:43:20.747974 | orchestrator |             {
2026-01-01 00:43:20.747984 | orchestrator |                 "lv_name": "osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99",
2026-01-01 00:43:20.747994 | orchestrator |                 "vg_name": "ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99"
2026-01-01 00:43:20.748004 | orchestrator |             },
2026-01-01 00:43:20.748014 | orchestrator |             {
2026-01-01 00:43:20.748024 | orchestrator |                 "lv_name": "osd-block-906f607d-f8ab-576d-9485-c345cfde3c80",
2026-01-01 00:43:20.748034 | orchestrator |                 "vg_name": "ceph-906f607d-f8ab-576d-9485-c345cfde3c80"
2026-01-01 00:43:20.748044 | orchestrator |             }
2026-01-01 00:43:20.748054 | orchestrator |         ],
2026-01-01 00:43:20.748063 | orchestrator |         "pv": [
2026-01-01 00:43:20.748073 | orchestrator |             {
2026-01-01 00:43:20.748083 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-01 00:43:20.748093 | orchestrator |                 "vg_name": "ceph-906f607d-f8ab-576d-9485-c345cfde3c80"
2026-01-01 00:43:20.748101 | orchestrator |             },
2026-01-01 00:43:20.748110 | orchestrator |             {
2026-01-01 00:43:20.748118 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-01 00:43:20.748127 | orchestrator |                 "vg_name": "ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99"
2026-01-01 00:43:20.748135 | orchestrator |             }
2026-01-01 00:43:20.748144 | orchestrator |         ]
2026-01-01 00:43:20.748152 | orchestrator |     }
2026-01-01 00:43:20.748161 | orchestrator | }
2026-01-01 00:43:20.748170 | orchestrator |
2026-01-01 00:43:20.748178 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-01 00:43:20.748187 | orchestrator |
2026-01-01 00:43:20.748196 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-01 00:43:20.748211 | orchestrator | Thursday 01 January 2026 00:43:17 +0000 (0:00:00.312) 0:00:27.652 ******
2026-01-01 00:43:20.748220 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-01 00:43:20.748228 | orchestrator |
2026-01-01 00:43:20.748237 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-01 00:43:20.748246 | orchestrator | Thursday 01 January 2026 00:43:18 +0000 (0:00:00.241) 0:00:27.894 ******
2026-01-01 00:43:20.748254 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:43:20.748263 | orchestrator |
2026-01-01 00:43:20.748271 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:20.748280 | orchestrator | Thursday 01 January 2026 00:43:18 +0000 (0:00:00.250) 0:00:28.145 ******
2026-01-01 00:43:20.748289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-01 00:43:20.748298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-01 00:43:20.748306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-01 00:43:20.748315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-01 00:43:20.748323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-01 00:43:20.748332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-01 00:43:20.748345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-01 00:43:20.748354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-01 00:43:20.748362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-01 00:43:20.748371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-01 00:43:20.748380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-01 00:43:20.748388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-01 00:43:20.748396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-01 00:43:20.748405 | orchestrator |
2026-01-01 00:43:20.748414 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:20.748422 | orchestrator | Thursday 01 January 2026 00:43:18 +0000 (0:00:00.419) 0:00:28.565 ******
2026-01-01 00:43:20.748431 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:20.748439 | orchestrator |
2026-01-01 00:43:20.748448 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:20.748456 | orchestrator | Thursday 01 January 2026 00:43:19 +0000 (0:00:00.231) 0:00:28.797 ******
2026-01-01 00:43:20.748465 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:20.748473 | orchestrator |
2026-01-01 00:43:20.748482 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:20.748490 | orchestrator | Thursday 01 January 2026 00:43:19 +0000 (0:00:00.202) 0:00:28.999 ******
2026-01-01 00:43:20.748499 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:20.748507 | orchestrator |
2026-01-01 00:43:20.748516 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:20.748525 | orchestrator | Thursday 01 January 2026 00:43:20 +0000 (0:00:00.767) 0:00:29.767 ******
2026-01-01 00:43:20.748533 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:20.748542 | orchestrator |
2026-01-01 00:43:20.748550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:20.748559 | orchestrator | Thursday 01 January 2026 00:43:20 +0000 (0:00:00.216) 0:00:29.983 ******
2026-01-01 00:43:20.748567 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:20.748576 | orchestrator |
2026-01-01 00:43:20.748584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:20.748599 | orchestrator | Thursday 01 January 2026 00:43:20 +0000 (0:00:00.198) 0:00:30.182 ******
2026-01-01 00:43:20.748608 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:20.748616 | orchestrator |
2026-01-01 00:43:20.748630 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:33.546087 | orchestrator | Thursday 01 January 2026 00:43:20 +0000 (0:00:00.220) 0:00:30.402 ******
2026-01-01 00:43:33.546197 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.546215 | orchestrator |
2026-01-01 00:43:33.546228 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:33.546240 | orchestrator | Thursday 01 January 2026 00:43:20 +0000 (0:00:00.218) 0:00:30.620 ******
2026-01-01 00:43:33.546251 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.546262 | orchestrator |
2026-01-01 00:43:33.546273 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:33.546285 | orchestrator | Thursday 01 January 2026 00:43:21 +0000 (0:00:00.230) 0:00:30.851 ******
2026-01-01 00:43:33.546295 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea)
2026-01-01 00:43:33.546308 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea)
2026-01-01 00:43:33.546319 | orchestrator |
2026-01-01 00:43:33.546329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:33.546340 | orchestrator | Thursday 01 January 2026 00:43:21 +0000 (0:00:00.491) 0:00:31.343 ******
2026-01-01 00:43:33.546351 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0)
2026-01-01 00:43:33.546362 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0)
2026-01-01 00:43:33.546373 | orchestrator |
2026-01-01 00:43:33.546384 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:33.546394 | orchestrator | Thursday 01 January 2026 00:43:22 +0000 (0:00:00.506) 0:00:31.849 ******
2026-01-01 00:43:33.546405 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1)
2026-01-01 00:43:33.546416 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1)
2026-01-01 00:43:33.546427 | orchestrator |
2026-01-01 00:43:33.546438 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:33.546449 | orchestrator | Thursday 01 January 2026 00:43:22 +0000 (0:00:00.468) 0:00:32.318 ******
2026-01-01 00:43:33.546459 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e)
2026-01-01 00:43:33.546470 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e)
2026-01-01 00:43:33.546481 | orchestrator |
2026-01-01 00:43:33.546492 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-01 00:43:33.546503 | orchestrator | Thursday 01 January 2026 00:43:23 +0000 (0:00:00.949) 0:00:33.267 ******
2026-01-01 00:43:33.546514 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-01 00:43:33.546525 | orchestrator |
2026-01-01 00:43:33.546535 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.546546 | orchestrator | Thursday 01 January 2026 00:43:24 +0000 (0:00:00.806) 0:00:34.074 ******
2026-01-01 00:43:33.546560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-01 00:43:33.546574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-01 00:43:33.546593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-01 00:43:33.546613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-01 00:43:33.546631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-01 00:43:33.546724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-01 00:43:33.546741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-01 00:43:33.546753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-01 00:43:33.546766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-01 00:43:33.546779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-01 00:43:33.546792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-01 00:43:33.546805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-01 00:43:33.546819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-01 00:43:33.546831 | orchestrator |
2026-01-01 00:43:33.546844 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.546856 | orchestrator | Thursday 01 January 2026 00:43:25 +0000 (0:00:01.058) 0:00:35.132 ******
2026-01-01 00:43:33.546869 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.546882 | orchestrator |
2026-01-01 00:43:33.546896 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.546909 | orchestrator | Thursday 01 January 2026 00:43:25 +0000 (0:00:00.215) 0:00:35.348 ******
2026-01-01 00:43:33.546920 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.546931 | orchestrator |
2026-01-01 00:43:33.546942 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.546953 | orchestrator | Thursday 01 January 2026 00:43:26 +0000 (0:00:00.376) 0:00:35.725 ******
2026-01-01 00:43:33.546964 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.546975 | orchestrator |
2026-01-01 00:43:33.547003 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.547015 | orchestrator | Thursday 01 January 2026 00:43:26 +0000 (0:00:00.217) 0:00:35.942 ******
2026-01-01 00:43:33.547026 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.547036 | orchestrator |
2026-01-01 00:43:33.547048 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.547058 | orchestrator | Thursday 01 January 2026 00:43:26 +0000 (0:00:00.225) 0:00:36.168 ******
2026-01-01 00:43:33.547069 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.547080 | orchestrator |
2026-01-01 00:43:33.547091 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.547102 | orchestrator | Thursday 01 January 2026 00:43:26 +0000 (0:00:00.209) 0:00:36.377 ******
2026-01-01 00:43:33.547112 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.547123 | orchestrator |
2026-01-01 00:43:33.547134 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.547144 | orchestrator | Thursday 01 January 2026 00:43:26 +0000 (0:00:00.214) 0:00:36.592 ******
2026-01-01 00:43:33.547155 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.547166 | orchestrator |
2026-01-01 00:43:33.547177 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.547188 | orchestrator | Thursday 01 January 2026 00:43:27 +0000 (0:00:00.220) 0:00:36.812 ******
2026-01-01 00:43:33.547199 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.547209 | orchestrator |
2026-01-01 00:43:33.547220 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.547231 | orchestrator | Thursday 01 January 2026 00:43:27 +0000 (0:00:00.249) 0:00:37.062 ******
2026-01-01 00:43:33.547242 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-01 00:43:33.547253 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-01 00:43:33.547264 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-01 00:43:33.547275 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-01 00:43:33.547295 | orchestrator |
2026-01-01 00:43:33.547306 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.547317 | orchestrator | Thursday 01 January 2026 00:43:28 +0000 (0:00:00.915) 0:00:37.978 ******
2026-01-01 00:43:33.547328 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.547338 | orchestrator |
2026-01-01 00:43:33.547349 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.547360 | orchestrator | Thursday 01 January 2026 00:43:28 +0000 (0:00:00.206) 0:00:38.185 ******
2026-01-01 00:43:33.547371 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.547381 | orchestrator |
2026-01-01 00:43:33.547392 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.547404 | orchestrator | Thursday 01 January 2026 00:43:29 +0000 (0:00:00.740) 0:00:38.925 ******
2026-01-01 00:43:33.547414 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.547425 | orchestrator |
2026-01-01 00:43:33.547436 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-01 00:43:33.547447 | orchestrator | Thursday 01 January 2026 00:43:29 +0000 (0:00:00.253) 0:00:39.179 ******
2026-01-01 00:43:33.547458 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.547469 | orchestrator |
2026-01-01 00:43:33.547488 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-01 00:43:33.547514 | orchestrator | Thursday 01 January 2026 00:43:29 +0000 (0:00:00.213) 0:00:39.392 ******
2026-01-01 00:43:33.547533 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.547551 | orchestrator |
2026-01-01 00:43:33.547564 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-01 00:43:33.547574 | orchestrator | Thursday 01 January 2026 00:43:29 +0000 (0:00:00.170) 0:00:39.563 ******
2026-01-01 00:43:33.547585 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4f4651f5-78d1-505d-b741-249c77d228e7'}})
2026-01-01 00:43:33.547596 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5dc050d-fe50-5167-b35b-32fd51d3d555'}})
2026-01-01 00:43:33.547607 | orchestrator |
2026-01-01 00:43:33.547619 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-01 00:43:33.547637 | orchestrator | Thursday 01 January 2026 00:43:30 +0000 (0:00:00.282) 0:00:39.846 ******
2026-01-01 00:43:33.547650 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})
2026-01-01 00:43:33.547662 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})
2026-01-01 00:43:33.547673 | orchestrator |
2026-01-01 00:43:33.547683 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-01 00:43:33.547694 | orchestrator | Thursday 01 January 2026 00:43:32 +0000 (0:00:01.857) 0:00:41.703 ******
2026-01-01 00:43:33.547721 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})
2026-01-01 00:43:33.547734 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})
2026-01-01 00:43:33.547745 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:33.547756 | orchestrator |
2026-01-01 00:43:33.547766 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-01 00:43:33.547777 | orchestrator | Thursday 01 January 2026 00:43:32 +0000 (0:00:00.151) 0:00:41.855 ******
2026-01-01 00:43:33.547788 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})
2026-01-01 00:43:33.547807 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})
2026-01-01 00:43:39.530941 | orchestrator |
2026-01-01 00:43:39.531059 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-01 00:43:39.531075 | orchestrator | Thursday 01 January 2026 00:43:33 +0000 (0:00:01.341) 0:00:43.196 ******
2026-01-01 00:43:39.531088 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})
2026-01-01 00:43:39.531102 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})
2026-01-01 00:43:39.531113 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531125 | orchestrator |
2026-01-01 00:43:39.531136 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-01 00:43:39.531147 | orchestrator | Thursday 01 January 2026 00:43:33 +0000 (0:00:00.152) 0:00:43.369 ******
2026-01-01 00:43:39.531158 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531169 | orchestrator |
2026-01-01 00:43:39.531180 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-01 00:43:39.531190 | orchestrator | Thursday 01 January 2026 00:43:33 +0000 (0:00:00.152) 0:00:43.522 ******
2026-01-01 00:43:39.531201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})
2026-01-01 00:43:39.531212 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})
2026-01-01 00:43:39.531222 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531233 | orchestrator |
2026-01-01 00:43:39.531243 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-01 00:43:39.531254 | orchestrator | Thursday 01 January 2026 00:43:34 +0000 (0:00:00.175) 0:00:43.697 ******
2026-01-01 00:43:39.531265 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531275 | orchestrator |
2026-01-01 00:43:39.531286 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-01 00:43:39.531296 | orchestrator | Thursday 01 January 2026 00:43:34 +0000 (0:00:00.157) 0:00:43.855 ******
2026-01-01 00:43:39.531307 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})
2026-01-01 00:43:39.531318 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})
2026-01-01 00:43:39.531329 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531339 | orchestrator |
2026-01-01 00:43:39.531350 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-01 00:43:39.531376 | orchestrator | Thursday 01 January 2026 00:43:34 +0000 (0:00:00.412) 0:00:44.268 ******
2026-01-01 00:43:39.531387 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531398 | orchestrator |
2026-01-01 00:43:39.531408 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-01 00:43:39.531419 | orchestrator | Thursday 01 January 2026 00:43:34 +0000 (0:00:00.154) 0:00:44.422 ******
2026-01-01 00:43:39.531430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})
2026-01-01 00:43:39.531440 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})
2026-01-01 00:43:39.531451 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531463 | orchestrator |
2026-01-01 00:43:39.531476 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-01 00:43:39.531489 | orchestrator | Thursday 01 January 2026 00:43:34 +0000 (0:00:00.150) 0:00:44.573 ******
2026-01-01 00:43:39.531501 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:43:39.531536 | orchestrator |
2026-01-01 00:43:39.531549 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-01 00:43:39.531562 | orchestrator | Thursday 01 January 2026 00:43:35 +0000 (0:00:00.164) 0:00:44.737 ******
2026-01-01 00:43:39.531575 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})
2026-01-01 00:43:39.531587 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})
2026-01-01 00:43:39.531599 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531611 | orchestrator |
2026-01-01 00:43:39.531624 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-01 00:43:39.531636 | orchestrator | Thursday 01 January 2026 00:43:35 +0000 (0:00:00.156) 0:00:44.894 ******
2026-01-01 00:43:39.531648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})
2026-01-01 00:43:39.531662 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})
2026-01-01 00:43:39.531675 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531686 | orchestrator |
2026-01-01 00:43:39.531699 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-01 00:43:39.531756 | orchestrator | Thursday 01 January 2026 00:43:35 +0000 (0:00:00.167) 0:00:45.061 ******
2026-01-01 00:43:39.531770 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})
2026-01-01 00:43:39.531782 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})
2026-01-01 00:43:39.531795 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531809 | orchestrator |
2026-01-01 00:43:39.531822 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-01 00:43:39.531834 | orchestrator | Thursday 01 January 2026 00:43:35 +0000 (0:00:00.175) 0:00:45.237 ******
2026-01-01 00:43:39.531845 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531856 | orchestrator |
2026-01-01 00:43:39.531866 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-01 00:43:39.531877 | orchestrator | Thursday 01 January 2026 00:43:35 +0000 (0:00:00.173) 0:00:45.411 ******
2026-01-01 00:43:39.531888 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531898 | orchestrator |
2026-01-01 00:43:39.531909 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-01 00:43:39.531920 | orchestrator | Thursday 01 January 2026 00:43:35 +0000 (0:00:00.157) 0:00:45.568 ******
2026-01-01 00:43:39.531931 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:43:39.531942 | orchestrator |
2026-01-01 00:43:39.531952 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-01 00:43:39.531963 | orchestrator | Thursday 01 January 2026 00:43:36 +0000 (0:00:00.159) 0:00:45.728 ******
2026-01-01 00:43:39.531974 | orchestrator | ok: [testbed-node-4] => {
2026-01-01 00:43:39.531984 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-01 00:43:39.531996 | orchestrator | }
2026-01-01 00:43:39.532007 | orchestrator |
2026-01-01 00:43:39.532018 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-01
00:43:39.532028 | orchestrator | Thursday 01 January 2026 00:43:36 +0000 (0:00:00.156) 0:00:45.885 ****** 2026-01-01 00:43:39.532039 | orchestrator | ok: [testbed-node-4] => { 2026-01-01 00:43:39.532050 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-01 00:43:39.532061 | orchestrator | } 2026-01-01 00:43:39.532071 | orchestrator | 2026-01-01 00:43:39.532082 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-01 00:43:39.532093 | orchestrator | Thursday 01 January 2026 00:43:36 +0000 (0:00:00.140) 0:00:46.025 ****** 2026-01-01 00:43:39.532112 | orchestrator | ok: [testbed-node-4] => { 2026-01-01 00:43:39.532123 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-01 00:43:39.532134 | orchestrator | } 2026-01-01 00:43:39.532144 | orchestrator | 2026-01-01 00:43:39.532155 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-01 00:43:39.532166 | orchestrator | Thursday 01 January 2026 00:43:36 +0000 (0:00:00.402) 0:00:46.428 ****** 2026-01-01 00:43:39.532186 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:43:39.532204 | orchestrator | 2026-01-01 00:43:39.532223 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-01 00:43:39.532250 | orchestrator | Thursday 01 January 2026 00:43:37 +0000 (0:00:00.579) 0:00:47.008 ****** 2026-01-01 00:43:39.532270 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:43:39.532288 | orchestrator | 2026-01-01 00:43:39.532306 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-01 00:43:39.532325 | orchestrator | Thursday 01 January 2026 00:43:37 +0000 (0:00:00.524) 0:00:47.533 ****** 2026-01-01 00:43:39.532342 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:43:39.532361 | orchestrator | 2026-01-01 00:43:39.532379 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-01-01 00:43:39.532399 | orchestrator | Thursday 01 January 2026 00:43:38 +0000 (0:00:00.512) 0:00:48.045 ****** 2026-01-01 00:43:39.532418 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:43:39.532436 | orchestrator | 2026-01-01 00:43:39.532455 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-01 00:43:39.532473 | orchestrator | Thursday 01 January 2026 00:43:38 +0000 (0:00:00.166) 0:00:48.212 ****** 2026-01-01 00:43:39.532484 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:39.532495 | orchestrator | 2026-01-01 00:43:39.532515 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-01 00:43:39.532527 | orchestrator | Thursday 01 January 2026 00:43:38 +0000 (0:00:00.112) 0:00:48.325 ****** 2026-01-01 00:43:39.532537 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:39.532548 | orchestrator | 2026-01-01 00:43:39.532559 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-01 00:43:39.532570 | orchestrator | Thursday 01 January 2026 00:43:38 +0000 (0:00:00.123) 0:00:48.448 ****** 2026-01-01 00:43:39.532580 | orchestrator | ok: [testbed-node-4] => { 2026-01-01 00:43:39.532591 | orchestrator |  "vgs_report": { 2026-01-01 00:43:39.532601 | orchestrator |  "vg": [] 2026-01-01 00:43:39.532612 | orchestrator |  } 2026-01-01 00:43:39.532623 | orchestrator | } 2026-01-01 00:43:39.532633 | orchestrator | 2026-01-01 00:43:39.532644 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-01 00:43:39.532655 | orchestrator | Thursday 01 January 2026 00:43:38 +0000 (0:00:00.155) 0:00:48.604 ****** 2026-01-01 00:43:39.532665 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:39.532676 | orchestrator | 2026-01-01 00:43:39.532686 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-01-01 00:43:39.532697 | orchestrator | Thursday 01 January 2026 00:43:39 +0000 (0:00:00.150) 0:00:48.754 ****** 2026-01-01 00:43:39.532733 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:39.532744 | orchestrator | 2026-01-01 00:43:39.532755 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-01 00:43:39.532766 | orchestrator | Thursday 01 January 2026 00:43:39 +0000 (0:00:00.150) 0:00:48.904 ****** 2026-01-01 00:43:39.532777 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:39.532787 | orchestrator | 2026-01-01 00:43:39.532798 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-01 00:43:39.532808 | orchestrator | Thursday 01 January 2026 00:43:39 +0000 (0:00:00.140) 0:00:49.045 ****** 2026-01-01 00:43:39.532819 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:39.532830 | orchestrator | 2026-01-01 00:43:39.532851 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-01 00:43:44.685103 | orchestrator | Thursday 01 January 2026 00:43:39 +0000 (0:00:00.139) 0:00:49.185 ****** 2026-01-01 00:43:44.685209 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685220 | orchestrator | 2026-01-01 00:43:44.685227 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-01 00:43:44.685234 | orchestrator | Thursday 01 January 2026 00:43:39 +0000 (0:00:00.375) 0:00:49.561 ****** 2026-01-01 00:43:44.685239 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685245 | orchestrator | 2026-01-01 00:43:44.685251 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-01 00:43:44.685257 | orchestrator | Thursday 01 January 2026 00:43:40 +0000 (0:00:00.158) 0:00:49.720 ****** 2026-01-01 00:43:44.685263 | orchestrator | skipping: [testbed-node-4] 
2026-01-01 00:43:44.685268 | orchestrator | 2026-01-01 00:43:44.685274 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-01 00:43:44.685280 | orchestrator | Thursday 01 January 2026 00:43:40 +0000 (0:00:00.140) 0:00:49.860 ****** 2026-01-01 00:43:44.685286 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685291 | orchestrator | 2026-01-01 00:43:44.685297 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-01 00:43:44.685303 | orchestrator | Thursday 01 January 2026 00:43:40 +0000 (0:00:00.151) 0:00:50.012 ****** 2026-01-01 00:43:44.685308 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685314 | orchestrator | 2026-01-01 00:43:44.685320 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-01 00:43:44.685325 | orchestrator | Thursday 01 January 2026 00:43:40 +0000 (0:00:00.150) 0:00:50.162 ****** 2026-01-01 00:43:44.685331 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685337 | orchestrator | 2026-01-01 00:43:44.685342 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-01 00:43:44.685348 | orchestrator | Thursday 01 January 2026 00:43:40 +0000 (0:00:00.149) 0:00:50.311 ****** 2026-01-01 00:43:44.685354 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685359 | orchestrator | 2026-01-01 00:43:44.685365 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-01 00:43:44.685371 | orchestrator | Thursday 01 January 2026 00:43:40 +0000 (0:00:00.157) 0:00:50.469 ****** 2026-01-01 00:43:44.685377 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685382 | orchestrator | 2026-01-01 00:43:44.685388 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-01 00:43:44.685394 | orchestrator | 
Thursday 01 January 2026 00:43:40 +0000 (0:00:00.180) 0:00:50.649 ****** 2026-01-01 00:43:44.685399 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685405 | orchestrator | 2026-01-01 00:43:44.685411 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-01 00:43:44.685417 | orchestrator | Thursday 01 January 2026 00:43:41 +0000 (0:00:00.140) 0:00:50.790 ****** 2026-01-01 00:43:44.685422 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685428 | orchestrator | 2026-01-01 00:43:44.685434 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-01 00:43:44.685450 | orchestrator | Thursday 01 January 2026 00:43:41 +0000 (0:00:00.161) 0:00:50.951 ****** 2026-01-01 00:43:44.685457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})  2026-01-01 00:43:44.685465 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})  2026-01-01 00:43:44.685470 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685476 | orchestrator | 2026-01-01 00:43:44.685482 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-01 00:43:44.685488 | orchestrator | Thursday 01 January 2026 00:43:41 +0000 (0:00:00.163) 0:00:51.115 ****** 2026-01-01 00:43:44.685493 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})  2026-01-01 00:43:44.685504 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})  2026-01-01 00:43:44.685509 | orchestrator | skipping: 
[testbed-node-4] 2026-01-01 00:43:44.685515 | orchestrator | 2026-01-01 00:43:44.685521 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-01 00:43:44.685527 | orchestrator | Thursday 01 January 2026 00:43:41 +0000 (0:00:00.163) 0:00:51.278 ****** 2026-01-01 00:43:44.685532 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})  2026-01-01 00:43:44.685538 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})  2026-01-01 00:43:44.685544 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685550 | orchestrator | 2026-01-01 00:43:44.685555 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-01 00:43:44.685561 | orchestrator | Thursday 01 January 2026 00:43:42 +0000 (0:00:00.411) 0:00:51.689 ****** 2026-01-01 00:43:44.685567 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})  2026-01-01 00:43:44.685573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})  2026-01-01 00:43:44.685578 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685584 | orchestrator | 2026-01-01 00:43:44.685601 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-01 00:43:44.685608 | orchestrator | Thursday 01 January 2026 00:43:42 +0000 (0:00:00.167) 0:00:51.857 ****** 2026-01-01 00:43:44.685613 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 
'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})  2026-01-01 00:43:44.685619 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})  2026-01-01 00:43:44.685625 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685631 | orchestrator | 2026-01-01 00:43:44.685637 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-01 00:43:44.685643 | orchestrator | Thursday 01 January 2026 00:43:42 +0000 (0:00:00.209) 0:00:52.067 ****** 2026-01-01 00:43:44.685649 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})  2026-01-01 00:43:44.685656 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})  2026-01-01 00:43:44.685662 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685669 | orchestrator | 2026-01-01 00:43:44.685675 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-01 00:43:44.685682 | orchestrator | Thursday 01 January 2026 00:43:42 +0000 (0:00:00.169) 0:00:52.236 ****** 2026-01-01 00:43:44.685689 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})  2026-01-01 00:43:44.685695 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})  2026-01-01 00:43:44.685721 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685728 | orchestrator | 2026-01-01 00:43:44.685735 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-01 
00:43:44.685741 | orchestrator | Thursday 01 January 2026 00:43:42 +0000 (0:00:00.146) 0:00:52.382 ****** 2026-01-01 00:43:44.685753 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})  2026-01-01 00:43:44.685764 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})  2026-01-01 00:43:44.685771 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685777 | orchestrator | 2026-01-01 00:43:44.685783 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-01 00:43:44.685789 | orchestrator | Thursday 01 January 2026 00:43:42 +0000 (0:00:00.151) 0:00:52.534 ****** 2026-01-01 00:43:44.685795 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:43:44.685801 | orchestrator | 2026-01-01 00:43:44.685806 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-01 00:43:44.685812 | orchestrator | Thursday 01 January 2026 00:43:43 +0000 (0:00:00.539) 0:00:53.074 ****** 2026-01-01 00:43:44.685818 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:43:44.685824 | orchestrator | 2026-01-01 00:43:44.685829 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-01 00:43:44.685835 | orchestrator | Thursday 01 January 2026 00:43:43 +0000 (0:00:00.553) 0:00:53.627 ****** 2026-01-01 00:43:44.685841 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:43:44.685847 | orchestrator | 2026-01-01 00:43:44.685853 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-01 00:43:44.685858 | orchestrator | Thursday 01 January 2026 00:43:44 +0000 (0:00:00.164) 0:00:53.792 ****** 2026-01-01 00:43:44.685864 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'vg_name': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'}) 2026-01-01 00:43:44.685871 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'vg_name': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'}) 2026-01-01 00:43:44.685877 | orchestrator | 2026-01-01 00:43:44.685883 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-01 00:43:44.685889 | orchestrator | Thursday 01 January 2026 00:43:44 +0000 (0:00:00.189) 0:00:53.982 ****** 2026-01-01 00:43:44.685895 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})  2026-01-01 00:43:44.685901 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})  2026-01-01 00:43:44.685906 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:44.685912 | orchestrator | 2026-01-01 00:43:44.685918 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-01 00:43:44.685924 | orchestrator | Thursday 01 January 2026 00:43:44 +0000 (0:00:00.173) 0:00:54.155 ****** 2026-01-01 00:43:44.685929 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})  2026-01-01 00:43:44.685939 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})  2026-01-01 00:43:50.963690 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:50.963815 | orchestrator | 2026-01-01 00:43:50.963827 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-01 00:43:50.963836 | 
orchestrator | Thursday 01 January 2026 00:43:44 +0000 (0:00:00.184) 0:00:54.340 ****** 2026-01-01 00:43:50.963844 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'})  2026-01-01 00:43:50.963852 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'})  2026-01-01 00:43:50.963860 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:43:50.963886 | orchestrator | 2026-01-01 00:43:50.963894 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-01 00:43:50.963901 | orchestrator | Thursday 01 January 2026 00:43:44 +0000 (0:00:00.208) 0:00:54.548 ****** 2026-01-01 00:43:50.963908 | orchestrator | ok: [testbed-node-4] => { 2026-01-01 00:43:50.963915 | orchestrator |  "lvm_report": { 2026-01-01 00:43:50.963923 | orchestrator |  "lv": [ 2026-01-01 00:43:50.963930 | orchestrator |  { 2026-01-01 00:43:50.963936 | orchestrator |  "lv_name": "osd-block-4f4651f5-78d1-505d-b741-249c77d228e7", 2026-01-01 00:43:50.963944 | orchestrator |  "vg_name": "ceph-4f4651f5-78d1-505d-b741-249c77d228e7" 2026-01-01 00:43:50.963951 | orchestrator |  }, 2026-01-01 00:43:50.963957 | orchestrator |  { 2026-01-01 00:43:50.963964 | orchestrator |  "lv_name": "osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555", 2026-01-01 00:43:50.963970 | orchestrator |  "vg_name": "ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555" 2026-01-01 00:43:50.963977 | orchestrator |  } 2026-01-01 00:43:50.963984 | orchestrator |  ], 2026-01-01 00:43:50.963990 | orchestrator |  "pv": [ 2026-01-01 00:43:50.963997 | orchestrator |  { 2026-01-01 00:43:50.964003 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-01 00:43:50.964010 | orchestrator |  "vg_name": "ceph-4f4651f5-78d1-505d-b741-249c77d228e7" 2026-01-01 00:43:50.964017 | orchestrator |  }, 2026-01-01 
00:43:50.964023 | orchestrator |  { 2026-01-01 00:43:50.964030 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-01 00:43:50.964036 | orchestrator |  "vg_name": "ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555" 2026-01-01 00:43:50.964043 | orchestrator |  } 2026-01-01 00:43:50.964049 | orchestrator |  ] 2026-01-01 00:43:50.964056 | orchestrator |  } 2026-01-01 00:43:50.964062 | orchestrator | } 2026-01-01 00:43:50.964070 | orchestrator | 2026-01-01 00:43:50.964076 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-01 00:43:50.964083 | orchestrator | 2026-01-01 00:43:50.964089 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-01 00:43:50.964097 | orchestrator | Thursday 01 January 2026 00:43:45 +0000 (0:00:00.560) 0:00:55.109 ****** 2026-01-01 00:43:50.964103 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-01 00:43:50.964110 | orchestrator | 2026-01-01 00:43:50.964117 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-01 00:43:50.964124 | orchestrator | Thursday 01 January 2026 00:43:45 +0000 (0:00:00.235) 0:00:55.344 ****** 2026-01-01 00:43:50.964131 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:43:50.964137 | orchestrator | 2026-01-01 00:43:50.964144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964151 | orchestrator | Thursday 01 January 2026 00:43:45 +0000 (0:00:00.211) 0:00:55.556 ****** 2026-01-01 00:43:50.964157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-01 00:43:50.964164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-01 00:43:50.964171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-01 00:43:50.964177 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-01 00:43:50.964184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-01 00:43:50.964190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-01 00:43:50.964197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-01 00:43:50.964203 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-01 00:43:50.964210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-01 00:43:50.964221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-01 00:43:50.964228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-01 00:43:50.964235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-01 00:43:50.964244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-01 00:43:50.964256 | orchestrator | 2026-01-01 00:43:50.964271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964282 | orchestrator | Thursday 01 January 2026 00:43:46 +0000 (0:00:00.406) 0:00:55.963 ****** 2026-01-01 00:43:50.964293 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:43:50.964304 | orchestrator | 2026-01-01 00:43:50.964314 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964324 | orchestrator | Thursday 01 January 2026 00:43:46 +0000 (0:00:00.175) 0:00:56.138 ****** 2026-01-01 00:43:50.964335 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:43:50.964345 | orchestrator | 2026-01-01 
00:43:50.964357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964383 | orchestrator | Thursday 01 January 2026 00:43:46 +0000 (0:00:00.185) 0:00:56.323 ****** 2026-01-01 00:43:50.964394 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:43:50.964404 | orchestrator | 2026-01-01 00:43:50.964413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964422 | orchestrator | Thursday 01 January 2026 00:43:46 +0000 (0:00:00.177) 0:00:56.500 ****** 2026-01-01 00:43:50.964433 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:43:50.964444 | orchestrator | 2026-01-01 00:43:50.964453 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964510 | orchestrator | Thursday 01 January 2026 00:43:47 +0000 (0:00:00.198) 0:00:56.698 ****** 2026-01-01 00:43:50.964523 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:43:50.964532 | orchestrator | 2026-01-01 00:43:50.964542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964552 | orchestrator | Thursday 01 January 2026 00:43:47 +0000 (0:00:00.488) 0:00:57.187 ****** 2026-01-01 00:43:50.964562 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:43:50.964571 | orchestrator | 2026-01-01 00:43:50.964583 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964592 | orchestrator | Thursday 01 January 2026 00:43:47 +0000 (0:00:00.180) 0:00:57.367 ****** 2026-01-01 00:43:50.964602 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:43:50.964612 | orchestrator | 2026-01-01 00:43:50.964622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964633 | orchestrator | Thursday 01 January 2026 00:43:47 +0000 (0:00:00.184) 
0:00:57.552 ****** 2026-01-01 00:43:50.964644 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:43:50.964654 | orchestrator | 2026-01-01 00:43:50.964664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964673 | orchestrator | Thursday 01 January 2026 00:43:48 +0000 (0:00:00.238) 0:00:57.791 ****** 2026-01-01 00:43:50.964684 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44) 2026-01-01 00:43:50.964696 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44) 2026-01-01 00:43:50.964779 | orchestrator | 2026-01-01 00:43:50.964790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964799 | orchestrator | Thursday 01 January 2026 00:43:48 +0000 (0:00:00.441) 0:00:58.232 ****** 2026-01-01 00:43:50.964809 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d) 2026-01-01 00:43:50.964819 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d) 2026-01-01 00:43:50.964830 | orchestrator | 2026-01-01 00:43:50.964850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964866 | orchestrator | Thursday 01 January 2026 00:43:49 +0000 (0:00:00.475) 0:00:58.707 ****** 2026-01-01 00:43:50.964876 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0) 2026-01-01 00:43:50.964886 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0) 2026-01-01 00:43:50.964896 | orchestrator | 2026-01-01 00:43:50.964906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964916 | orchestrator | Thursday 01 
January 2026 00:43:49 +0000 (0:00:00.485) 0:00:59.193 ****** 2026-01-01 00:43:50.964926 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb) 2026-01-01 00:43:50.964936 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb) 2026-01-01 00:43:50.964947 | orchestrator | 2026-01-01 00:43:50.964958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-01 00:43:50.964969 | orchestrator | Thursday 01 January 2026 00:43:49 +0000 (0:00:00.449) 0:00:59.643 ****** 2026-01-01 00:43:50.964979 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-01 00:43:50.964990 | orchestrator | 2026-01-01 00:43:50.965001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:43:50.965010 | orchestrator | Thursday 01 January 2026 00:43:50 +0000 (0:00:00.535) 0:01:00.178 ****** 2026-01-01 00:43:50.965020 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-01 00:43:50.965032 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-01 00:43:50.965042 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-01 00:43:50.965053 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-01 00:43:50.965064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-01 00:43:50.965075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-01 00:43:50.965086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-01 00:43:50.965096 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-01 00:43:50.965107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-01 00:43:50.965118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-01 00:43:50.965128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-01 00:43:50.965151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-01 00:44:00.398265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-01 00:44:00.398371 | orchestrator | 2026-01-01 00:44:00.398385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398396 | orchestrator | Thursday 01 January 2026 00:43:50 +0000 (0:00:00.435) 0:01:00.614 ****** 2026-01-01 00:44:00.398405 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398415 | orchestrator | 2026-01-01 00:44:00.398424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398433 | orchestrator | Thursday 01 January 2026 00:43:51 +0000 (0:00:00.224) 0:01:00.838 ****** 2026-01-01 00:44:00.398441 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398450 | orchestrator | 2026-01-01 00:44:00.398459 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398467 | orchestrator | Thursday 01 January 2026 00:43:51 +0000 (0:00:00.734) 0:01:01.573 ****** 2026-01-01 00:44:00.398497 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398506 | orchestrator | 2026-01-01 00:44:00.398515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398524 | 
orchestrator | Thursday 01 January 2026 00:43:52 +0000 (0:00:00.277) 0:01:01.851 ****** 2026-01-01 00:44:00.398532 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398541 | orchestrator | 2026-01-01 00:44:00.398549 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398558 | orchestrator | Thursday 01 January 2026 00:43:52 +0000 (0:00:00.207) 0:01:02.058 ****** 2026-01-01 00:44:00.398566 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398575 | orchestrator | 2026-01-01 00:44:00.398583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398592 | orchestrator | Thursday 01 January 2026 00:43:52 +0000 (0:00:00.283) 0:01:02.342 ****** 2026-01-01 00:44:00.398600 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398609 | orchestrator | 2026-01-01 00:44:00.398617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398626 | orchestrator | Thursday 01 January 2026 00:43:52 +0000 (0:00:00.232) 0:01:02.574 ****** 2026-01-01 00:44:00.398634 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398643 | orchestrator | 2026-01-01 00:44:00.398652 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398660 | orchestrator | Thursday 01 January 2026 00:43:53 +0000 (0:00:00.216) 0:01:02.790 ****** 2026-01-01 00:44:00.398669 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398677 | orchestrator | 2026-01-01 00:44:00.398686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398694 | orchestrator | Thursday 01 January 2026 00:43:53 +0000 (0:00:00.191) 0:01:02.982 ****** 2026-01-01 00:44:00.398742 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-01 00:44:00.398753 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-01 00:44:00.398763 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-01 00:44:00.398771 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-01 00:44:00.398780 | orchestrator | 2026-01-01 00:44:00.398788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398797 | orchestrator | Thursday 01 January 2026 00:43:54 +0000 (0:00:00.689) 0:01:03.671 ****** 2026-01-01 00:44:00.398806 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398816 | orchestrator | 2026-01-01 00:44:00.398827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398838 | orchestrator | Thursday 01 January 2026 00:43:54 +0000 (0:00:00.256) 0:01:03.928 ****** 2026-01-01 00:44:00.398848 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398858 | orchestrator | 2026-01-01 00:44:00.398868 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398883 | orchestrator | Thursday 01 January 2026 00:43:54 +0000 (0:00:00.243) 0:01:04.172 ****** 2026-01-01 00:44:00.398897 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398911 | orchestrator | 2026-01-01 00:44:00.398925 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-01 00:44:00.398940 | orchestrator | Thursday 01 January 2026 00:43:54 +0000 (0:00:00.185) 0:01:04.357 ****** 2026-01-01 00:44:00.398953 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.398967 | orchestrator | 2026-01-01 00:44:00.398980 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-01 00:44:00.398992 | orchestrator | Thursday 01 January 2026 00:43:54 +0000 (0:00:00.191) 0:01:04.548 ****** 2026-01-01 00:44:00.399005 | orchestrator | skipping: [testbed-node-5] 2026-01-01 
00:44:00.399020 | orchestrator | 2026-01-01 00:44:00.399033 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-01 00:44:00.399049 | orchestrator | Thursday 01 January 2026 00:43:55 +0000 (0:00:00.279) 0:01:04.828 ****** 2026-01-01 00:44:00.399064 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '21a5f53a-dc04-53e0-afe9-de267ba79db4'}}) 2026-01-01 00:44:00.399091 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b87804f1-5161-5843-851c-861f025ab6ce'}}) 2026-01-01 00:44:00.399106 | orchestrator | 2026-01-01 00:44:00.399121 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-01 00:44:00.399136 | orchestrator | Thursday 01 January 2026 00:43:55 +0000 (0:00:00.185) 0:01:05.014 ****** 2026-01-01 00:44:00.399148 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'}) 2026-01-01 00:44:00.399158 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'}) 2026-01-01 00:44:00.399167 | orchestrator | 2026-01-01 00:44:00.399175 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-01 00:44:00.399199 | orchestrator | Thursday 01 January 2026 00:43:57 +0000 (0:00:01.864) 0:01:06.878 ****** 2026-01-01 00:44:00.399209 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:00.399220 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:00.399231 | orchestrator | skipping: 
[testbed-node-5] 2026-01-01 00:44:00.399242 | orchestrator | 2026-01-01 00:44:00.399253 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-01 00:44:00.399265 | orchestrator | Thursday 01 January 2026 00:43:57 +0000 (0:00:00.155) 0:01:07.033 ****** 2026-01-01 00:44:00.399275 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'}) 2026-01-01 00:44:00.399286 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'}) 2026-01-01 00:44:00.399297 | orchestrator | 2026-01-01 00:44:00.399308 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-01 00:44:00.399319 | orchestrator | Thursday 01 January 2026 00:43:58 +0000 (0:00:01.342) 0:01:08.376 ****** 2026-01-01 00:44:00.399330 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:00.399341 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:00.399352 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.399362 | orchestrator | 2026-01-01 00:44:00.399373 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-01 00:44:00.399384 | orchestrator | Thursday 01 January 2026 00:43:58 +0000 (0:00:00.215) 0:01:08.591 ****** 2026-01-01 00:44:00.399395 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.399405 | orchestrator | 2026-01-01 00:44:00.399416 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-01 00:44:00.399427 | 
orchestrator | Thursday 01 January 2026 00:43:59 +0000 (0:00:00.138) 0:01:08.729 ****** 2026-01-01 00:44:00.399445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:00.399457 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:00.399468 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.399478 | orchestrator | 2026-01-01 00:44:00.399489 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-01 00:44:00.399508 | orchestrator | Thursday 01 January 2026 00:43:59 +0000 (0:00:00.166) 0:01:08.896 ****** 2026-01-01 00:44:00.399519 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.399530 | orchestrator | 2026-01-01 00:44:00.399540 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-01 00:44:00.399551 | orchestrator | Thursday 01 January 2026 00:43:59 +0000 (0:00:00.154) 0:01:09.051 ****** 2026-01-01 00:44:00.399562 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:00.399573 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:00.399584 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.399594 | orchestrator | 2026-01-01 00:44:00.399605 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-01 00:44:00.399616 | orchestrator | Thursday 01 January 2026 00:43:59 +0000 (0:00:00.141) 0:01:09.192 ****** 2026-01-01 00:44:00.399627 | orchestrator | 
skipping: [testbed-node-5] 2026-01-01 00:44:00.399638 | orchestrator | 2026-01-01 00:44:00.399648 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-01 00:44:00.399659 | orchestrator | Thursday 01 January 2026 00:43:59 +0000 (0:00:00.146) 0:01:09.338 ****** 2026-01-01 00:44:00.399670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:00.399681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:00.399692 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:00.399734 | orchestrator | 2026-01-01 00:44:00.399756 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-01 00:44:00.399775 | orchestrator | Thursday 01 January 2026 00:43:59 +0000 (0:00:00.152) 0:01:09.491 ****** 2026-01-01 00:44:00.399793 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:44:00.399811 | orchestrator | 2026-01-01 00:44:00.399828 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-01 00:44:00.399846 | orchestrator | Thursday 01 January 2026 00:44:00 +0000 (0:00:00.402) 0:01:09.894 ****** 2026-01-01 00:44:00.399874 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:06.324099 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:06.324190 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324200 | orchestrator | 2026-01-01 00:44:06.324208 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-01 00:44:06.324216 | orchestrator | Thursday 01 January 2026 00:44:00 +0000 (0:00:00.161) 0:01:10.055 ****** 2026-01-01 00:44:06.324224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:06.324231 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:06.324237 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324244 | orchestrator | 2026-01-01 00:44:06.324250 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-01 00:44:06.324257 | orchestrator | Thursday 01 January 2026 00:44:00 +0000 (0:00:00.164) 0:01:10.219 ****** 2026-01-01 00:44:06.324263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:06.324270 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:06.324294 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324301 | orchestrator | 2026-01-01 00:44:06.324307 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-01 00:44:06.324314 | orchestrator | Thursday 01 January 2026 00:44:00 +0000 (0:00:00.153) 0:01:10.373 ****** 2026-01-01 00:44:06.324320 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324326 | orchestrator | 2026-01-01 00:44:06.324332 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-01 00:44:06.324339 | orchestrator | Thursday 01 January 2026 00:44:00 +0000 
(0:00:00.141) 0:01:10.514 ****** 2026-01-01 00:44:06.324345 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324351 | orchestrator | 2026-01-01 00:44:06.324357 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-01 00:44:06.324363 | orchestrator | Thursday 01 January 2026 00:44:00 +0000 (0:00:00.143) 0:01:10.658 ****** 2026-01-01 00:44:06.324370 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324376 | orchestrator | 2026-01-01 00:44:06.324382 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-01 00:44:06.324389 | orchestrator | Thursday 01 January 2026 00:44:01 +0000 (0:00:00.154) 0:01:10.813 ****** 2026-01-01 00:44:06.324395 | orchestrator | ok: [testbed-node-5] => { 2026-01-01 00:44:06.324402 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-01 00:44:06.324408 | orchestrator | } 2026-01-01 00:44:06.324415 | orchestrator | 2026-01-01 00:44:06.324421 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-01 00:44:06.324428 | orchestrator | Thursday 01 January 2026 00:44:01 +0000 (0:00:00.167) 0:01:10.980 ****** 2026-01-01 00:44:06.324434 | orchestrator | ok: [testbed-node-5] => { 2026-01-01 00:44:06.324440 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-01 00:44:06.324447 | orchestrator | } 2026-01-01 00:44:06.324453 | orchestrator | 2026-01-01 00:44:06.324459 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-01 00:44:06.324466 | orchestrator | Thursday 01 January 2026 00:44:01 +0000 (0:00:00.158) 0:01:11.138 ****** 2026-01-01 00:44:06.324472 | orchestrator | ok: [testbed-node-5] => { 2026-01-01 00:44:06.324478 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-01 00:44:06.324484 | orchestrator | } 2026-01-01 00:44:06.324491 | orchestrator | 2026-01-01 00:44:06.324497 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-01-01 00:44:06.324503 | orchestrator | Thursday 01 January 2026 00:44:01 +0000 (0:00:00.147) 0:01:11.286 ****** 2026-01-01 00:44:06.324509 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:44:06.324516 | orchestrator | 2026-01-01 00:44:06.324522 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-01 00:44:06.324528 | orchestrator | Thursday 01 January 2026 00:44:02 +0000 (0:00:00.516) 0:01:11.803 ****** 2026-01-01 00:44:06.324534 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:44:06.324540 | orchestrator | 2026-01-01 00:44:06.324547 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-01 00:44:06.324553 | orchestrator | Thursday 01 January 2026 00:44:02 +0000 (0:00:00.515) 0:01:12.318 ****** 2026-01-01 00:44:06.324559 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:44:06.324565 | orchestrator | 2026-01-01 00:44:06.324571 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-01 00:44:06.324578 | orchestrator | Thursday 01 January 2026 00:44:03 +0000 (0:00:00.691) 0:01:13.010 ****** 2026-01-01 00:44:06.324584 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:44:06.324590 | orchestrator | 2026-01-01 00:44:06.324596 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-01 00:44:06.324602 | orchestrator | Thursday 01 January 2026 00:44:03 +0000 (0:00:00.143) 0:01:13.153 ****** 2026-01-01 00:44:06.324608 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324614 | orchestrator | 2026-01-01 00:44:06.324621 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-01 00:44:06.324632 | orchestrator | Thursday 01 January 2026 00:44:03 +0000 (0:00:00.116) 0:01:13.270 ****** 2026-01-01 00:44:06.324638 | orchestrator | 
skipping: [testbed-node-5] 2026-01-01 00:44:06.324645 | orchestrator | 2026-01-01 00:44:06.324653 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-01 00:44:06.324674 | orchestrator | Thursday 01 January 2026 00:44:03 +0000 (0:00:00.112) 0:01:13.383 ****** 2026-01-01 00:44:06.324682 | orchestrator | ok: [testbed-node-5] => { 2026-01-01 00:44:06.324690 | orchestrator |  "vgs_report": { 2026-01-01 00:44:06.324697 | orchestrator |  "vg": [] 2026-01-01 00:44:06.324740 | orchestrator |  } 2026-01-01 00:44:06.324749 | orchestrator | } 2026-01-01 00:44:06.324756 | orchestrator | 2026-01-01 00:44:06.324763 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-01 00:44:06.324771 | orchestrator | Thursday 01 January 2026 00:44:03 +0000 (0:00:00.139) 0:01:13.522 ****** 2026-01-01 00:44:06.324778 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324786 | orchestrator | 2026-01-01 00:44:06.324793 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-01 00:44:06.324800 | orchestrator | Thursday 01 January 2026 00:44:03 +0000 (0:00:00.128) 0:01:13.650 ****** 2026-01-01 00:44:06.324808 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324815 | orchestrator | 2026-01-01 00:44:06.324823 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-01 00:44:06.324830 | orchestrator | Thursday 01 January 2026 00:44:04 +0000 (0:00:00.143) 0:01:13.793 ****** 2026-01-01 00:44:06.324838 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324845 | orchestrator | 2026-01-01 00:44:06.324852 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-01 00:44:06.324860 | orchestrator | Thursday 01 January 2026 00:44:04 +0000 (0:00:00.128) 0:01:13.922 ****** 2026-01-01 00:44:06.324867 | orchestrator | 
skipping: [testbed-node-5] 2026-01-01 00:44:06.324875 | orchestrator | 2026-01-01 00:44:06.324882 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-01 00:44:06.324889 | orchestrator | Thursday 01 January 2026 00:44:04 +0000 (0:00:00.148) 0:01:14.071 ****** 2026-01-01 00:44:06.324897 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324904 | orchestrator | 2026-01-01 00:44:06.324912 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-01 00:44:06.324919 | orchestrator | Thursday 01 January 2026 00:44:04 +0000 (0:00:00.130) 0:01:14.201 ****** 2026-01-01 00:44:06.324926 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324932 | orchestrator | 2026-01-01 00:44:06.324938 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-01 00:44:06.324944 | orchestrator | Thursday 01 January 2026 00:44:04 +0000 (0:00:00.118) 0:01:14.320 ****** 2026-01-01 00:44:06.324951 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324957 | orchestrator | 2026-01-01 00:44:06.324963 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-01 00:44:06.324969 | orchestrator | Thursday 01 January 2026 00:44:04 +0000 (0:00:00.125) 0:01:14.445 ****** 2026-01-01 00:44:06.324975 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.324981 | orchestrator | 2026-01-01 00:44:06.324988 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-01 00:44:06.324994 | orchestrator | Thursday 01 January 2026 00:44:05 +0000 (0:00:00.304) 0:01:14.750 ****** 2026-01-01 00:44:06.325000 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.325006 | orchestrator | 2026-01-01 00:44:06.325015 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-01 
00:44:06.325022 | orchestrator | Thursday 01 January 2026 00:44:05 +0000 (0:00:00.114) 0:01:14.864 ****** 2026-01-01 00:44:06.325028 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.325034 | orchestrator | 2026-01-01 00:44:06.325040 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-01 00:44:06.325052 | orchestrator | Thursday 01 January 2026 00:44:05 +0000 (0:00:00.130) 0:01:14.994 ****** 2026-01-01 00:44:06.325058 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.325064 | orchestrator | 2026-01-01 00:44:06.325070 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-01 00:44:06.325077 | orchestrator | Thursday 01 January 2026 00:44:05 +0000 (0:00:00.131) 0:01:15.125 ****** 2026-01-01 00:44:06.325083 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.325089 | orchestrator | 2026-01-01 00:44:06.325095 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-01 00:44:06.325101 | orchestrator | Thursday 01 January 2026 00:44:05 +0000 (0:00:00.133) 0:01:15.259 ****** 2026-01-01 00:44:06.325107 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.325113 | orchestrator | 2026-01-01 00:44:06.325119 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-01 00:44:06.325126 | orchestrator | Thursday 01 January 2026 00:44:05 +0000 (0:00:00.137) 0:01:15.396 ****** 2026-01-01 00:44:06.325132 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.325138 | orchestrator | 2026-01-01 00:44:06.325144 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-01 00:44:06.325150 | orchestrator | Thursday 01 January 2026 00:44:05 +0000 (0:00:00.132) 0:01:15.528 ****** 2026-01-01 00:44:06.325156 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:06.325163 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:06.325169 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.325175 | orchestrator | 2026-01-01 00:44:06.325181 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-01 00:44:06.325187 | orchestrator | Thursday 01 January 2026 00:44:06 +0000 (0:00:00.158) 0:01:15.687 ****** 2026-01-01 00:44:06.325193 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:06.325200 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:06.325206 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:06.325212 | orchestrator | 2026-01-01 00:44:06.325218 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-01 00:44:06.325224 | orchestrator | Thursday 01 January 2026 00:44:06 +0000 (0:00:00.145) 0:01:15.833 ****** 2026-01-01 00:44:06.325235 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:09.186490 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:09.186673 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:09.187600 | orchestrator | 2026-01-01 00:44:09.187627 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-01 00:44:09.187640 | orchestrator | Thursday 01 January 2026 00:44:06 +0000 (0:00:00.146) 0:01:15.980 ****** 2026-01-01 00:44:09.187652 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:09.187664 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:09.187675 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:09.187686 | orchestrator | 2026-01-01 00:44:09.187697 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-01 00:44:09.187758 | orchestrator | Thursday 01 January 2026 00:44:06 +0000 (0:00:00.150) 0:01:16.131 ****** 2026-01-01 00:44:09.187779 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:09.187799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:09.187818 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:09.187835 | orchestrator | 2026-01-01 00:44:09.187846 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-01 00:44:09.187857 | orchestrator | Thursday 01 January 2026 00:44:06 +0000 (0:00:00.174) 0:01:16.305 ****** 2026-01-01 00:44:09.187868 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:09.187894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:09.187906 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:09.187916 | orchestrator | 2026-01-01 00:44:09.187927 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-01 00:44:09.187938 | orchestrator | Thursday 01 January 2026 00:44:06 +0000 (0:00:00.302) 0:01:16.607 ****** 2026-01-01 00:44:09.187949 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:09.187959 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:09.187971 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:09.187982 | orchestrator | 2026-01-01 00:44:09.187993 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-01 00:44:09.188004 | orchestrator | Thursday 01 January 2026 00:44:07 +0000 (0:00:00.162) 0:01:16.770 ****** 2026-01-01 00:44:09.188015 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:09.188026 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:09.188037 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:09.188047 | orchestrator | 2026-01-01 00:44:09.188058 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-01 00:44:09.188069 | orchestrator | Thursday 01 January 2026 00:44:07 +0000 (0:00:00.130) 0:01:16.900 ****** 2026-01-01 00:44:09.188080 | 
orchestrator | ok: [testbed-node-5] 2026-01-01 00:44:09.188091 | orchestrator | 2026-01-01 00:44:09.188101 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-01 00:44:09.188112 | orchestrator | Thursday 01 January 2026 00:44:07 +0000 (0:00:00.508) 0:01:17.409 ****** 2026-01-01 00:44:09.188132 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:44:09.188151 | orchestrator | 2026-01-01 00:44:09.188171 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-01 00:44:09.188182 | orchestrator | Thursday 01 January 2026 00:44:08 +0000 (0:00:00.503) 0:01:17.912 ****** 2026-01-01 00:44:09.188193 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:44:09.188204 | orchestrator | 2026-01-01 00:44:09.188215 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-01 00:44:09.188226 | orchestrator | Thursday 01 January 2026 00:44:08 +0000 (0:00:00.136) 0:01:18.049 ****** 2026-01-01 00:44:09.188237 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'vg_name': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'}) 2026-01-01 00:44:09.188249 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'vg_name': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'}) 2026-01-01 00:44:09.188268 | orchestrator | 2026-01-01 00:44:09.188279 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-01 00:44:09.188290 | orchestrator | Thursday 01 January 2026 00:44:08 +0000 (0:00:00.157) 0:01:18.207 ****** 2026-01-01 00:44:09.188322 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:09.188334 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:09.188345 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:09.188356 | orchestrator | 2026-01-01 00:44:09.188367 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-01 00:44:09.188379 | orchestrator | Thursday 01 January 2026 00:44:08 +0000 (0:00:00.146) 0:01:18.353 ****** 2026-01-01 00:44:09.188390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:09.188401 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:09.188412 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:09.188423 | orchestrator | 2026-01-01 00:44:09.188434 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-01 00:44:09.188444 | orchestrator | Thursday 01 January 2026 00:44:08 +0000 (0:00:00.159) 0:01:18.513 ****** 2026-01-01 00:44:09.188455 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'})  2026-01-01 00:44:09.188466 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'})  2026-01-01 00:44:09.188477 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:09.188488 | orchestrator | 2026-01-01 00:44:09.188499 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-01 00:44:09.188510 | orchestrator | Thursday 01 January 2026 00:44:09 +0000 (0:00:00.158) 0:01:18.671 ****** 2026-01-01 00:44:09.188520 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-01 00:44:09.188532 | orchestrator |  "lvm_report": { 2026-01-01 00:44:09.188543 | orchestrator |  "lv": [ 2026-01-01 00:44:09.188553 | orchestrator |  { 2026-01-01 00:44:09.188570 | orchestrator |  "lv_name": "osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4", 2026-01-01 00:44:09.188582 | orchestrator |  "vg_name": "ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4" 2026-01-01 00:44:09.188592 | orchestrator |  }, 2026-01-01 00:44:09.188603 | orchestrator |  { 2026-01-01 00:44:09.188614 | orchestrator |  "lv_name": "osd-block-b87804f1-5161-5843-851c-861f025ab6ce", 2026-01-01 00:44:09.188625 | orchestrator |  "vg_name": "ceph-b87804f1-5161-5843-851c-861f025ab6ce" 2026-01-01 00:44:09.188636 | orchestrator |  } 2026-01-01 00:44:09.188646 | orchestrator |  ], 2026-01-01 00:44:09.188657 | orchestrator |  "pv": [ 2026-01-01 00:44:09.188668 | orchestrator |  { 2026-01-01 00:44:09.188678 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-01 00:44:09.188689 | orchestrator |  "vg_name": "ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4" 2026-01-01 00:44:09.188700 | orchestrator |  }, 2026-01-01 00:44:09.188769 | orchestrator |  { 2026-01-01 00:44:09.188781 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-01 00:44:09.188792 | orchestrator |  "vg_name": "ceph-b87804f1-5161-5843-851c-861f025ab6ce" 2026-01-01 00:44:09.188803 | orchestrator |  } 2026-01-01 00:44:09.188814 | orchestrator |  ] 2026-01-01 00:44:09.188832 | orchestrator |  } 2026-01-01 00:44:09.188844 | orchestrator | } 2026-01-01 00:44:09.188855 | orchestrator | 2026-01-01 00:44:09.188866 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:44:09.188877 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-01 00:44:09.188889 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-01 00:44:09.188900 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-01 00:44:09.188914 | orchestrator | 2026-01-01 00:44:09.188932 | orchestrator | 2026-01-01 00:44:09.188947 | orchestrator | 2026-01-01 00:44:09.188958 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:44:09.188969 | orchestrator | Thursday 01 January 2026 00:44:09 +0000 (0:00:00.149) 0:01:18.821 ****** 2026-01-01 00:44:09.188980 | orchestrator | =============================================================================== 2026-01-01 00:44:09.188991 | orchestrator | Create block VGs -------------------------------------------------------- 5.85s 2026-01-01 00:44:09.189002 | orchestrator | Create block LVs -------------------------------------------------------- 4.16s 2026-01-01 00:44:09.189013 | orchestrator | Add known partitions to the list of available block devices ------------- 1.94s 2026-01-01 00:44:09.189023 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.83s 2026-01-01 00:44:09.189034 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.79s 2026-01-01 00:44:09.189045 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.61s 2026-01-01 00:44:09.189056 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2026-01-01 00:44:09.189067 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.56s 2026-01-01 00:44:09.189086 | orchestrator | Add known links to the list of available block devices ------------------ 1.27s 2026-01-01 00:44:09.665546 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s 2026-01-01 00:44:09.665643 | orchestrator | Add known links to the list of available block devices ------------------ 1.13s 2026-01-01 00:44:09.665654 | 
orchestrator | Print LVM report data --------------------------------------------------- 1.02s 2026-01-01 00:44:09.665663 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2026-01-01 00:44:09.665672 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s 2026-01-01 00:44:09.665681 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2026-01-01 00:44:09.665690 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s 2026-01-01 00:44:09.665698 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s 2026-01-01 00:44:09.665764 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.76s 2026-01-01 00:44:09.665774 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s 2026-01-01 00:44:09.665783 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.76s 2026-01-01 00:44:22.495790 | orchestrator | 2026-01-01 00:44:22 | INFO  | Task b3475778-a583-4cbc-84ab-315ec975e00e (facts) was prepared for execution. 2026-01-01 00:44:22.495879 | orchestrator | 2026-01-01 00:44:22 | INFO  | It takes a moment until task b3475778-a583-4cbc-84ab-315ec975e00e (facts) has been started and output is visible here. 
2026-01-01 00:44:36.325334 | orchestrator | 2026-01-01 00:44:36.325450 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-01 00:44:36.325469 | orchestrator | 2026-01-01 00:44:36.325482 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-01 00:44:36.325495 | orchestrator | Thursday 01 January 2026 00:44:27 +0000 (0:00:00.266) 0:00:00.266 ****** 2026-01-01 00:44:36.325545 | orchestrator | ok: [testbed-manager] 2026-01-01 00:44:36.325567 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:44:36.325586 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:44:36.325607 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:44:36.325625 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:44:36.325644 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:44:36.325663 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:44:36.325680 | orchestrator | 2026-01-01 00:44:36.325698 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-01 00:44:36.325914 | orchestrator | Thursday 01 January 2026 00:44:28 +0000 (0:00:01.118) 0:00:01.384 ****** 2026-01-01 00:44:36.325942 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:44:36.325957 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:44:36.325969 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:44:36.325982 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:44:36.325995 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:44:36.326008 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:44:36.326099 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:36.326112 | orchestrator | 2026-01-01 00:44:36.326125 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-01 00:44:36.326138 | orchestrator | 2026-01-01 00:44:36.326151 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-01 00:44:36.326164 | orchestrator | Thursday 01 January 2026 00:44:29 +0000 (0:00:01.275) 0:00:02.660 ****** 2026-01-01 00:44:36.326177 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:44:36.326190 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:44:36.326203 | orchestrator | ok: [testbed-manager] 2026-01-01 00:44:36.326216 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:44:36.326229 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:44:36.326241 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:44:36.326252 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:44:36.326262 | orchestrator | 2026-01-01 00:44:36.326273 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-01 00:44:36.326284 | orchestrator | 2026-01-01 00:44:36.326320 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-01 00:44:36.326331 | orchestrator | Thursday 01 January 2026 00:44:35 +0000 (0:00:05.770) 0:00:08.431 ****** 2026-01-01 00:44:36.326342 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:44:36.326353 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:44:36.326364 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:44:36.326375 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:44:36.326386 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:44:36.326396 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:44:36.326407 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:44:36.326418 | orchestrator | 2026-01-01 00:44:36.326429 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:44:36.326441 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:44:36.326454 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-01 00:44:36.326465 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:44:36.326476 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:44:36.326487 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:44:36.326498 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:44:36.326524 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:44:36.326535 | orchestrator | 2026-01-01 00:44:36.326546 | orchestrator | 2026-01-01 00:44:36.326558 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:44:36.326569 | orchestrator | Thursday 01 January 2026 00:44:35 +0000 (0:00:00.535) 0:00:08.966 ****** 2026-01-01 00:44:36.326580 | orchestrator | =============================================================================== 2026-01-01 00:44:36.326590 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.77s 2026-01-01 00:44:36.326601 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2026-01-01 00:44:36.326612 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2026-01-01 00:44:36.326623 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-01-01 00:44:48.820260 | orchestrator | 2026-01-01 00:44:48 | INFO  | Task 0d887a02-2dde-4431-ad48-44998985d011 (frr) was prepared for execution. 2026-01-01 00:44:48.820415 | orchestrator | 2026-01-01 00:44:48 | INFO  | It takes a moment until task 0d887a02-2dde-4431-ad48-44998985d011 (frr) has been started and output is visible here. 
2026-01-01 00:45:16.096100 | orchestrator | 2026-01-01 00:45:16.096233 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-01 00:45:16.096253 | orchestrator | 2026-01-01 00:45:16.096267 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-01 00:45:16.096297 | orchestrator | Thursday 01 January 2026 00:44:53 +0000 (0:00:00.240) 0:00:00.240 ****** 2026-01-01 00:45:16.096320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-01 00:45:16.096333 | orchestrator | 2026-01-01 00:45:16.096345 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-01 00:45:16.096356 | orchestrator | Thursday 01 January 2026 00:44:53 +0000 (0:00:00.228) 0:00:00.468 ****** 2026-01-01 00:45:16.096368 | orchestrator | changed: [testbed-manager] 2026-01-01 00:45:16.096380 | orchestrator | 2026-01-01 00:45:16.096391 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-01 00:45:16.096408 | orchestrator | Thursday 01 January 2026 00:44:54 +0000 (0:00:01.231) 0:00:01.700 ****** 2026-01-01 00:45:16.096419 | orchestrator | changed: [testbed-manager] 2026-01-01 00:45:16.096430 | orchestrator | 2026-01-01 00:45:16.096441 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-01 00:45:16.096452 | orchestrator | Thursday 01 January 2026 00:45:05 +0000 (0:00:10.864) 0:00:12.564 ****** 2026-01-01 00:45:16.096463 | orchestrator | ok: [testbed-manager] 2026-01-01 00:45:16.096474 | orchestrator | 2026-01-01 00:45:16.096485 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-01 00:45:16.096496 | orchestrator | Thursday 01 January 2026 00:45:06 +0000 (0:00:01.069) 0:00:13.634 ****** 2026-01-01 
00:45:16.096507 | orchestrator | changed: [testbed-manager] 2026-01-01 00:45:16.096518 | orchestrator | 2026-01-01 00:45:16.096529 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-01 00:45:16.096540 | orchestrator | Thursday 01 January 2026 00:45:07 +0000 (0:00:00.945) 0:00:14.579 ****** 2026-01-01 00:45:16.096551 | orchestrator | ok: [testbed-manager] 2026-01-01 00:45:16.096562 | orchestrator | 2026-01-01 00:45:16.096573 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-01 00:45:16.096585 | orchestrator | Thursday 01 January 2026 00:45:08 +0000 (0:00:01.236) 0:00:15.816 ****** 2026-01-01 00:45:16.096596 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:45:16.096606 | orchestrator | 2026-01-01 00:45:16.096618 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-01 00:45:16.096632 | orchestrator | Thursday 01 January 2026 00:45:09 +0000 (0:00:00.154) 0:00:15.970 ****** 2026-01-01 00:45:16.096673 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:45:16.096686 | orchestrator | 2026-01-01 00:45:16.096699 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-01 00:45:16.096713 | orchestrator | Thursday 01 January 2026 00:45:09 +0000 (0:00:00.160) 0:00:16.130 ****** 2026-01-01 00:45:16.096749 | orchestrator | changed: [testbed-manager] 2026-01-01 00:45:16.096760 | orchestrator | 2026-01-01 00:45:16.096771 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-01 00:45:16.096782 | orchestrator | Thursday 01 January 2026 00:45:10 +0000 (0:00:01.039) 0:00:17.170 ****** 2026-01-01 00:45:16.096792 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-01 00:45:16.096803 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-01 00:45:16.096815 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-01 00:45:16.096826 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-01 00:45:16.096837 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-01 00:45:16.096848 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-01 00:45:16.096859 | orchestrator | 2026-01-01 00:45:16.096870 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-01 00:45:16.096880 | orchestrator | Thursday 01 January 2026 00:45:12 +0000 (0:00:02.313) 0:00:19.484 ****** 2026-01-01 00:45:16.096891 | orchestrator | ok: [testbed-manager] 2026-01-01 00:45:16.096902 | orchestrator | 2026-01-01 00:45:16.096913 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-01 00:45:16.096924 | orchestrator | Thursday 01 January 2026 00:45:14 +0000 (0:00:01.693) 0:00:21.177 ****** 2026-01-01 00:45:16.096934 | orchestrator | changed: [testbed-manager] 2026-01-01 00:45:16.096945 | orchestrator | 2026-01-01 00:45:16.096956 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:45:16.096967 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:45:16.096978 | orchestrator | 2026-01-01 00:45:16.096989 | orchestrator | 2026-01-01 00:45:16.096999 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:45:16.097010 | orchestrator | Thursday 01 January 2026 00:45:15 +0000 (0:00:01.446) 0:00:22.624 ****** 2026-01-01 00:45:16.097021 | 
orchestrator | =============================================================================== 2026-01-01 00:45:16.097032 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.86s 2026-01-01 00:45:16.097043 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.31s 2026-01-01 00:45:16.097053 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.69s 2026-01-01 00:45:16.097064 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.45s 2026-01-01 00:45:16.097075 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.24s 2026-01-01 00:45:16.097104 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.23s 2026-01-01 00:45:16.097116 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.07s 2026-01-01 00:45:16.097126 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.04s 2026-01-01 00:45:16.097137 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.95s 2026-01-01 00:45:16.097148 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2026-01-01 00:45:16.097159 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-01-01 00:45:16.097170 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-01-01 00:45:16.532131 | orchestrator | 2026-01-01 00:45:16.535269 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Jan 1 00:45:16 UTC 2026 2026-01-01 00:45:16.535325 | orchestrator | 2026-01-01 00:45:18.622262 | orchestrator | 2026-01-01 00:45:18 | INFO  | Collection nutshell is prepared for execution 2026-01-01 00:45:18.622410 | orchestrator | 2026-01-01 00:45:18 | INFO  | A [0] - 
dotfiles 2026-01-01 00:45:28.682926 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [0] - homer 2026-01-01 00:45:28.683042 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [0] - netdata 2026-01-01 00:45:28.683059 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [0] - openstackclient 2026-01-01 00:45:28.683072 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [0] - phpmyadmin 2026-01-01 00:45:28.683084 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [0] - common 2026-01-01 00:45:28.685130 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [1] -- loadbalancer 2026-01-01 00:45:28.685384 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [2] --- opensearch 2026-01-01 00:45:28.685564 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [2] --- mariadb-ng 2026-01-01 00:45:28.686344 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [3] ---- horizon 2026-01-01 00:45:28.686623 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [3] ---- keystone 2026-01-01 00:45:28.686976 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [4] ----- neutron 2026-01-01 00:45:28.687753 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [5] ------ wait-for-nova 2026-01-01 00:45:28.687788 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [6] ------- octavia 2026-01-01 00:45:28.690090 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [4] ----- barbican 2026-01-01 00:45:28.690130 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [4] ----- designate 2026-01-01 00:45:28.690213 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [4] ----- ironic 2026-01-01 00:45:28.690234 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [4] ----- placement 2026-01-01 00:45:28.690245 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [4] ----- magnum 2026-01-01 00:45:28.691338 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [1] -- openvswitch 2026-01-01 00:45:28.691679 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [2] --- ovn 2026-01-01 00:45:28.692145 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [1] -- memcached 2026-01-01 
00:45:28.692239 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [1] -- redis 2026-01-01 00:45:28.692535 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [1] -- rabbitmq-ng 2026-01-01 00:45:28.692996 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [0] - kubernetes 2026-01-01 00:45:28.696639 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [1] -- kubeconfig 2026-01-01 00:45:28.697011 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [1] -- copy-kubeconfig 2026-01-01 00:45:28.697231 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [0] - ceph 2026-01-01 00:45:28.700105 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [1] -- ceph-pools 2026-01-01 00:45:28.700132 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [2] --- copy-ceph-keys 2026-01-01 00:45:28.700144 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [3] ---- cephclient 2026-01-01 00:45:28.700155 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-01 00:45:28.700576 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [4] ----- wait-for-keystone 2026-01-01 00:45:28.700596 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-01 00:45:28.700931 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [5] ------ glance 2026-01-01 00:45:28.701143 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [5] ------ cinder 2026-01-01 00:45:28.701166 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [5] ------ nova 2026-01-01 00:45:28.701479 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [4] ----- prometheus 2026-01-01 00:45:28.701829 | orchestrator | 2026-01-01 00:45:28 | INFO  | A [5] ------ grafana 2026-01-01 00:45:28.904532 | orchestrator | 2026-01-01 00:45:28 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-01 00:45:28.904651 | orchestrator | 2026-01-01 00:45:28 | INFO  | Tasks are running in the background 2026-01-01 00:45:32.412155 | orchestrator | 2026-01-01 00:45:32 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-01 00:45:34.560001 | orchestrator | 2026-01-01 00:45:34 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:45:34.560535 | orchestrator | 2026-01-01 00:45:34 | INFO  | Task f5c93d48-555a-4f92-a978-6f7036c63c7e is in state STARTED 2026-01-01 00:45:34.562223 | orchestrator | 2026-01-01 00:45:34 | INFO  | Task e1a6e34f-afb1-4738-87c3-8851202e4a58 is in state STARTED 2026-01-01 00:45:34.563556 | orchestrator | 2026-01-01 00:45:34 | INFO  | Task dda2f3f2-a644-4dfe-99a8-24372dbe4bdc is in state STARTED 2026-01-01 00:45:34.565918 | orchestrator | 2026-01-01 00:45:34 | INFO  | Task 8b04016c-75e2-4983-8709-aec3ad49f367 is in state STARTED 2026-01-01 00:45:34.566827 | orchestrator | 2026-01-01 00:45:34 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:45:34.568904 | orchestrator | 2026-01-01 00:45:34 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED 2026-01-01 00:45:34.568929 | orchestrator | 2026-01-01 00:45:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:45:37.618333 | orchestrator | 2026-01-01 00:45:37 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:45:37.619522 | orchestrator | 2026-01-01 00:45:37 | INFO  | Task f5c93d48-555a-4f92-a978-6f7036c63c7e is in state STARTED 2026-01-01 00:45:37.620211 | orchestrator | 2026-01-01 00:45:37 | INFO  | Task e1a6e34f-afb1-4738-87c3-8851202e4a58 is in state STARTED 2026-01-01 00:45:37.623032 | orchestrator | 2026-01-01 00:45:37 | INFO  | Task dda2f3f2-a644-4dfe-99a8-24372dbe4bdc is in state STARTED 2026-01-01 00:45:37.623615 | orchestrator | 2026-01-01 00:45:37 | INFO  | Task 8b04016c-75e2-4983-8709-aec3ad49f367 is in state STARTED 2026-01-01 00:45:37.624230 | orchestrator | 2026-01-01 00:45:37 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:45:37.624879 | orchestrator | 2026-01-01 00:45:37 | INFO  | Task 
15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED 2026-01-01 00:45:37.624998 | orchestrator | 2026-01-01 00:45:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:46:02.411944 | orchestrator | 2026-01-01 00:46:02 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:46:02.414374 | orchestrator | 2026-01-01 00:46:02 | INFO  | Task f5c93d48-555a-4f92-a978-6f7036c63c7e is in state STARTED 2026-01-01 00:46:02.419181 | orchestrator | 2026-01-01 00:46:02 | INFO  | Task 
e1a6e34f-afb1-4738-87c3-8851202e4a58 is in state STARTED 2026-01-01 00:46:02.419299 | orchestrator | 2026-01-01 00:46:02 | INFO  | Task dda2f3f2-a644-4dfe-99a8-24372dbe4bdc is in state STARTED 2026-01-01 00:46:02.420329 | orchestrator | 2026-01-01 00:46:02 | INFO  | Task 8b04016c-75e2-4983-8709-aec3ad49f367 is in state STARTED 2026-01-01 00:46:02.420365 | orchestrator | 2026-01-01 00:46:02 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:46:02.421110 | orchestrator | 2026-01-01 00:46:02 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED 2026-01-01 00:46:02.421142 | orchestrator | 2026-01-01 00:46:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:46:05.478665 | orchestrator | 2026-01-01 00:46:05 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:46:05.478802 | orchestrator | 2026-01-01 00:46:05 | INFO  | Task f5c93d48-555a-4f92-a978-6f7036c63c7e is in state STARTED 2026-01-01 00:46:05.481356 | orchestrator | 2026-01-01 00:46:05 | INFO  | Task e1a6e34f-afb1-4738-87c3-8851202e4a58 is in state STARTED 2026-01-01 00:46:05.482347 | orchestrator | 2026-01-01 00:46:05 | INFO  | Task dda2f3f2-a644-4dfe-99a8-24372dbe4bdc is in state STARTED 2026-01-01 00:46:05.483109 | orchestrator | 2026-01-01 00:46:05 | INFO  | Task bcdfe1f8-a7d7-4aea-9a4f-a0dc6829ea9c is in state STARTED 2026-01-01 00:46:05.485792 | orchestrator | 2026-01-01 00:46:05.485860 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-01-01 00:46:05.485875 | orchestrator | 2026-01-01 00:46:05.485887 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
****
2026-01-01 00:46:05.485899 | orchestrator | Thursday 01 January 2026  00:45:45 +0000 (0:00:00.958)       0:00:00.958 ******
2026-01-01 00:46:05.485910 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:46:05.485922 | orchestrator | changed: [testbed-manager]
2026-01-01 00:46:05.485933 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:46:05.485944 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:46:05.485955 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:46:05.485966 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:46:05.485976 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:46:05.485987 | orchestrator |
2026-01-01 00:46:05.485998 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-01-01 00:46:05.486009 | orchestrator | Thursday 01 January 2026  00:45:50 +0000 (0:00:04.740)       0:00:05.699 ******
2026-01-01 00:46:05.486071 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-01 00:46:05.486084 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-01 00:46:05.486095 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-01 00:46:05.486105 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-01 00:46:05.486116 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-01 00:46:05.486127 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-01 00:46:05.486138 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-01 00:46:05.486148 | orchestrator |
2026-01-01 00:46:05.486160 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
***
2026-01-01 00:46:05.486171 | orchestrator | Thursday 01 January 2026  00:45:52 +0000 (0:00:02.448)       0:00:08.147 ******
2026-01-01 00:46:05.486187 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-01 00:45:51.011605', 'end': '2026-01-01 00:45:51.020524', 'delta': '0:00:00.008919', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-01-01 00:46:05.493051 | orchestrator |
2026-01-01 00:46:05.493060 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-01-01 00:46:05.493070 | orchestrator | Thursday 01 January 2026  00:45:56 +0000 (0:00:04.096)       0:00:12.243 ******
2026-01-01 00:46:05.493079 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-01-01 00:46:05.493087 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-01-01 00:46:05.493095 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-01-01 00:46:05.493103 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-01-01 00:46:05.493111 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-01-01 00:46:05.493118 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-01-01 00:46:05.493126 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-01-01 00:46:05.493134 | orchestrator |
2026-01-01 00:46:05.493142 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
******************
2026-01-01 00:46:05.493150 | orchestrator | Thursday 01 January 2026  00:45:59 +0000 (0:00:02.617)       0:00:14.861 ******
2026-01-01 00:46:05.493159 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-01-01 00:46:05.493172 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-01-01 00:46:05.493185 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-01-01 00:46:05.493197 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-01-01 00:46:05.493210 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-01-01 00:46:05.493222 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-01-01 00:46:05.493234 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-01-01 00:46:05.493246 | orchestrator |
2026-01-01 00:46:05.493258 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:46:05.493283 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:46:05.493304 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:46:05.493318 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:46:05.493331 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:46:05.493341 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:46:05.493349 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:46:05.493357 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:46:05.493365 | orchestrator |
2026-01-01 00:46:05.493372 | orchestrator |
2026-01-01 00:46:05.493381 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:46:05.493388 | orchestrator | Thursday 01 January 2026  00:46:02 +0000 (0:00:03.084)       0:00:17.946 ******
2026-01-01 00:46:05.493397 | orchestrator | ===============================================================================
2026-01-01 00:46:05.493412 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.74s
2026-01-01 00:46:05.493420 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 4.10s
2026-01-01 00:46:05.493428 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.08s
2026-01-01 00:46:05.493436 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.62s
2026-01-01 00:46:05.493444 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.45s
2026-01-01 00:46:05.493452 | orchestrator | 2026-01-01 00:46:05 | INFO  | Task 8b04016c-75e2-4983-8709-aec3ad49f367 is in state SUCCESS
2026-01-01 00:46:05.493460 | orchestrator | 2026-01-01 00:46:05 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:46:05.493468 | orchestrator | 2026-01-01 00:46:05 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:46:05.493476 | orchestrator | 2026-01-01 00:46:05 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:46:08.618710 | orchestrator | 2026-01-01 00:46:08 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:46:08.618850 | orchestrator | 2026-01-01 00:46:08 | INFO  | Task f5c93d48-555a-4f92-a978-6f7036c63c7e is in state STARTED
2026-01-01 00:46:08.620134 | orchestrator | 2026-01-01 00:46:08 | INFO  | Task e1a6e34f-afb1-4738-87c3-8851202e4a58 is in state STARTED
2026-01-01 00:46:08.620154 | orchestrator | 2026-01-01 00:46:08 | INFO  | Task dda2f3f2-a644-4dfe-99a8-24372dbe4bdc is
in state STARTED
2026-01-01 00:46:08.624961 | orchestrator | 2026-01-01 00:46:08 | INFO  | Task bcdfe1f8-a7d7-4aea-9a4f-a0dc6829ea9c is in state STARTED
2026-01-01 00:46:08.625624 | orchestrator | 2026-01-01 00:46:08 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:46:08.626303 | orchestrator | 2026-01-01 00:46:08 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:46:08.626354 | orchestrator | 2026-01-01 00:46:08 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:46:27.119219 | orchestrator | 2026-01-01 00:46:27 | INFO  | Task e1a6e34f-afb1-4738-87c3-8851202e4a58 is in state SUCCESS
2026-01-01 00:46:36.278358 | orchestrator | 2026-01-01 00:46:36 | INFO  | Task dda2f3f2-a644-4dfe-99a8-24372dbe4bdc is in state SUCCESS
2026-01-01 00:47:16.190191 | orchestrator | 2026-01-01 00:47:16 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:16.190320 | orchestrator | 2026-01-01 00:47:16 | INFO  | Task f5c93d48-555a-4f92-a978-6f7036c63c7e is in state SUCCESS
2026-01-01 00:47:16.193807 | orchestrator |
2026-01-01 00:47:16.194584 | orchestrator |
2026-01-01 00:47:16.194618 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-01-01 00:47:16.194628 | orchestrator |
2026-01-01 00:47:16.194635 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-01-01 00:47:16.194643 | orchestrator | Thursday 01 January 2026
00:45:45 +0000 (0:00:00.720) 0:00:00.720 ****** 2026-01-01 00:47:16.194650 | orchestrator | ok: [testbed-manager] => { 2026-01-01 00:47:16.194659 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2026-01-01 00:47:16.194667 | orchestrator | } 2026-01-01 00:47:16.194674 | orchestrator | 2026-01-01 00:47:16.194681 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-01-01 00:47:16.194688 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.393) 0:00:01.114 ****** 2026-01-01 00:47:16.194695 | orchestrator | ok: [testbed-manager] 2026-01-01 00:47:16.194702 | orchestrator | 2026-01-01 00:47:16.194709 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-01-01 00:47:16.194716 | orchestrator | Thursday 01 January 2026 00:45:47 +0000 (0:00:01.533) 0:00:02.647 ****** 2026-01-01 00:47:16.194724 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-01-01 00:47:16.195292 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-01-01 00:47:16.195315 | orchestrator | 2026-01-01 00:47:16.195322 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-01-01 00:47:16.195329 | orchestrator | Thursday 01 January 2026 00:45:48 +0000 (0:00:01.397) 0:00:04.045 ****** 2026-01-01 00:47:16.195336 | orchestrator | changed: [testbed-manager] 2026-01-01 00:47:16.195343 | orchestrator | 2026-01-01 00:47:16.195350 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-01-01 00:47:16.195873 | orchestrator | Thursday 01 January 2026 00:45:51 +0000 (0:00:02.520) 0:00:06.565 ****** 2026-01-01 00:47:16.195884 | orchestrator | changed: [testbed-manager] 2026-01-01 00:47:16.195891 | orchestrator | 2026-01-01 00:47:16.195897 | orchestrator | TASK [osism.services.homer : 
Manage homer service] ***************************** 2026-01-01 00:47:16.195904 | orchestrator | Thursday 01 January 2026 00:45:52 +0000 (0:00:01.496) 0:00:08.061 ****** 2026-01-01 00:47:16.195910 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-01-01 00:47:16.195917 | orchestrator | ok: [testbed-manager] 2026-01-01 00:47:16.195923 | orchestrator | 2026-01-01 00:47:16.195930 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-01-01 00:47:16.195936 | orchestrator | Thursday 01 January 2026 00:46:18 +0000 (0:00:25.314) 0:00:33.376 ****** 2026-01-01 00:47:16.195943 | orchestrator | changed: [testbed-manager] 2026-01-01 00:47:16.195950 | orchestrator | 2026-01-01 00:47:16.195981 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:47:16.195989 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:47:16.195998 | orchestrator | 2026-01-01 00:47:16.196004 | orchestrator | 2026-01-01 00:47:16.196010 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:47:16.196016 | orchestrator | Thursday 01 January 2026 00:46:23 +0000 (0:00:05.633) 0:00:39.010 ****** 2026-01-01 00:47:16.196023 | orchestrator | =============================================================================== 2026-01-01 00:47:16.196030 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.31s 2026-01-01 00:47:16.196037 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.63s 2026-01-01 00:47:16.196043 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.52s 2026-01-01 00:47:16.196049 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.53s 2026-01-01 00:47:16.196055 | 
orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.50s 2026-01-01 00:47:16.196091 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.40s 2026-01-01 00:47:16.196198 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.39s 2026-01-01 00:47:16.196239 | orchestrator | 2026-01-01 00:47:16.196363 | orchestrator | 2026-01-01 00:47:16.196400 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-01-01 00:47:16.196408 | orchestrator | 2026-01-01 00:47:16.197424 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-01-01 00:47:16.197457 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.779) 0:00:00.780 ****** 2026-01-01 00:47:16.197471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-01-01 00:47:16.197484 | orchestrator | 2026-01-01 00:47:16.197495 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-01-01 00:47:16.197506 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.613) 0:00:01.393 ****** 2026-01-01 00:47:16.197517 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-01-01 00:47:16.197528 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-01-01 00:47:16.197539 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-01-01 00:47:16.197574 | orchestrator | 2026-01-01 00:47:16.197586 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-01-01 00:47:16.197597 | orchestrator | Thursday 01 January 2026 00:45:47 +0000 (0:00:02.020) 0:00:03.413 ****** 2026-01-01 00:47:16.197608 | orchestrator | 
changed: [testbed-manager] 2026-01-01 00:47:16.197618 | orchestrator | 2026-01-01 00:47:16.197629 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-01-01 00:47:16.197641 | orchestrator | Thursday 01 January 2026 00:45:49 +0000 (0:00:02.207) 0:00:05.621 ****** 2026-01-01 00:47:16.197681 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-01-01 00:47:16.197694 | orchestrator | ok: [testbed-manager] 2026-01-01 00:47:16.197706 | orchestrator | 2026-01-01 00:47:16.197716 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-01-01 00:47:16.197727 | orchestrator | Thursday 01 January 2026 00:46:25 +0000 (0:00:35.191) 0:00:40.813 ****** 2026-01-01 00:47:16.197738 | orchestrator | changed: [testbed-manager] 2026-01-01 00:47:16.197782 | orchestrator | 2026-01-01 00:47:16.197793 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-01-01 00:47:16.197804 | orchestrator | Thursday 01 January 2026 00:46:28 +0000 (0:00:03.242) 0:00:44.055 ****** 2026-01-01 00:47:16.197814 | orchestrator | ok: [testbed-manager] 2026-01-01 00:47:16.197825 | orchestrator | 2026-01-01 00:47:16.197836 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-01-01 00:47:16.197846 | orchestrator | Thursday 01 January 2026 00:46:29 +0000 (0:00:00.981) 0:00:45.037 ****** 2026-01-01 00:47:16.197857 | orchestrator | changed: [testbed-manager] 2026-01-01 00:47:16.197868 | orchestrator | 2026-01-01 00:47:16.197879 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-01-01 00:47:16.197889 | orchestrator | Thursday 01 January 2026 00:46:32 +0000 (0:00:03.409) 0:00:48.446 ****** 2026-01-01 00:47:16.197900 | orchestrator | changed: [testbed-manager] 2026-01-01 00:47:16.197910 | orchestrator | 
2026-01-01 00:47:16.197934 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-01 00:47:16.197946 | orchestrator | Thursday 01 January 2026 00:46:33 +0000 (0:00:01.103) 0:00:49.550 ******
2026-01-01 00:47:16.197964 | orchestrator | changed: [testbed-manager]
2026-01-01 00:47:16.197982 | orchestrator |
2026-01-01 00:47:16.198000 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-01 00:47:16.198069 | orchestrator | Thursday 01 January 2026 00:46:34 +0000 (0:00:00.930) 0:00:50.480 ******
2026-01-01 00:47:16.198094 | orchestrator | ok: [testbed-manager]
2026-01-01 00:47:16.198112 | orchestrator |
2026-01-01 00:47:16.198131 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:47:16.198143 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:47:16.198155 | orchestrator |
2026-01-01 00:47:16.198165 | orchestrator |
2026-01-01 00:47:16.198176 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:47:16.198187 | orchestrator | Thursday 01 January 2026 00:46:35 +0000 (0:00:00.406) 0:00:50.887 ******
2026-01-01 00:47:16.198199 | orchestrator | ===============================================================================
2026-01-01 00:47:16.198217 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.19s
2026-01-01 00:47:16.198244 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.41s
2026-01-01 00:47:16.198263 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 3.24s
2026-01-01 00:47:16.198280 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.21s
2026-01-01 00:47:16.198296 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.02s
2026-01-01 00:47:16.198313 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.10s
2026-01-01 00:47:16.198331 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.98s
2026-01-01 00:47:16.198363 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.93s
2026-01-01 00:47:16.198382 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.61s
2026-01-01 00:47:16.198399 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.41s
2026-01-01 00:47:16.198417 | orchestrator |
2026-01-01 00:47:16.198434 | orchestrator |
2026-01-01 00:47:16.198454 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 00:47:16.198472 | orchestrator |
2026-01-01 00:47:16.198490 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 00:47:16.198502 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.520) 0:00:00.520 ******
2026-01-01 00:47:16.198513 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-01 00:47:16.198524 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-01 00:47:16.198534 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-01 00:47:16.198545 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-01 00:47:16.198555 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-01 00:47:16.198566 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-01 00:47:16.198577 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-01 00:47:16.198589 | orchestrator |
2026-01-01 00:47:16.198608 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-01 00:47:16.198619 | orchestrator |
2026-01-01 00:47:16.198629 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-01 00:47:16.198640 | orchestrator | Thursday 01 January 2026 00:45:46 +0000 (0:00:01.589) 0:00:02.110 ******
2026-01-01 00:47:16.198666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:47:16.198680 | orchestrator |
2026-01-01 00:47:16.198691 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-01 00:47:16.198701 | orchestrator | Thursday 01 January 2026 00:45:48 +0000 (0:00:01.552) 0:00:03.662 ******
2026-01-01 00:47:16.198712 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:47:16.198723 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:47:16.198734 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:47:16.198825 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:47:16.198840 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:16.198865 | orchestrator | ok: [testbed-manager]
2026-01-01 00:47:16.198877 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:47:16.198887 | orchestrator |
2026-01-01 00:47:16.198898 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-01 00:47:16.198909 | orchestrator | Thursday 01 January 2026 00:45:50 +0000 (0:00:02.730) 0:00:06.392 ******
2026-01-01 00:47:16.198920 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:47:16.198931 | orchestrator | ok: [testbed-manager]
2026-01-01 00:47:16.198942 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:47:16.198953 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:47:16.198963 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:16.198974 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:47:16.198985 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:47:16.198995 | orchestrator |
2026-01-01 00:47:16.199006 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-01 00:47:16.199017 | orchestrator | Thursday 01 January 2026 00:45:54 +0000 (0:00:03.565) 0:00:09.958 ******
2026-01-01 00:47:16.199028 | orchestrator | changed: [testbed-manager]
2026-01-01 00:47:16.199039 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:47:16.199050 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:47:16.199061 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:47:16.199072 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:47:16.199092 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:47:16.199103 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:47:16.199113 | orchestrator |
2026-01-01 00:47:16.199131 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-01 00:47:16.199142 | orchestrator | Thursday 01 January 2026 00:45:57 +0000 (0:00:02.876) 0:00:12.834 ******
2026-01-01 00:47:16.199152 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:47:16.199163 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:47:16.199173 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:47:16.199184 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:47:16.199195 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:47:16.199205 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:47:16.199219 | orchestrator | changed: [testbed-manager]
2026-01-01 00:47:16.199237 | orchestrator |
2026-01-01 00:47:16.199265 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-01 00:47:16.199285 | orchestrator | Thursday 01 January 2026 00:46:11 +0000 (0:00:14.300) 0:00:27.134 ******
2026-01-01 00:47:16.199303 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:47:16.199320 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:47:16.199337 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:47:16.199354 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:47:16.199370 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:47:16.199389 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:47:16.199406 | orchestrator | changed: [testbed-manager]
2026-01-01 00:47:16.199425 | orchestrator |
2026-01-01 00:47:16.199444 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-01 00:47:16.199463 | orchestrator | Thursday 01 January 2026 00:46:53 +0000 (0:00:41.962) 0:01:09.097 ******
2026-01-01 00:47:16.199482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:47:16.199497 | orchestrator |
2026-01-01 00:47:16.199508 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-01 00:47:16.199518 | orchestrator | Thursday 01 January 2026 00:46:55 +0000 (0:00:01.339) 0:01:10.436 ******
2026-01-01 00:47:16.199529 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-01 00:47:16.199541 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-01 00:47:16.199551 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-01 00:47:16.199562 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-01 00:47:16.199572 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-01 00:47:16.199583 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-01 00:47:16.199594 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-01 00:47:16.199605 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-01 00:47:16.199615 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-01 00:47:16.199626 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-01 00:47:16.199636 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-01 00:47:16.199647 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-01 00:47:16.199657 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-01 00:47:16.199668 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-01 00:47:16.199678 | orchestrator |
2026-01-01 00:47:16.199689 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-01 00:47:16.199700 | orchestrator | Thursday 01 January 2026 00:47:00 +0000 (0:00:05.373) 0:01:15.810 ******
2026-01-01 00:47:16.199711 | orchestrator | ok: [testbed-manager]
2026-01-01 00:47:16.199721 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:47:16.199732 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:47:16.199772 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:47:16.199785 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:47:16.199806 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:47:16.199817 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:16.199832 | orchestrator |
2026-01-01 00:47:16.199851 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-01 00:47:16.199864 | orchestrator | Thursday 01 January 2026 00:47:01 +0000 (0:00:01.291) 0:01:17.101 ******
2026-01-01 00:47:16.199880 | orchestrator | changed: [testbed-manager]
2026-01-01 00:47:16.199896 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:47:16.199912 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:47:16.199923 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:47:16.199938 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:47:16.199955 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:47:16.199973 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:47:16.199991 | orchestrator |
2026-01-01 00:47:16.200006 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-01 00:47:16.200028 | orchestrator | Thursday 01 January 2026 00:47:02 +0000 (0:00:01.214) 0:01:18.316 ******
2026-01-01 00:47:16.200040 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:47:16.200050 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:16.200061 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:47:16.200071 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:47:16.200082 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:47:16.200092 | orchestrator | ok: [testbed-manager]
2026-01-01 00:47:16.200103 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:47:16.200114 | orchestrator |
2026-01-01 00:47:16.200125 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-01 00:47:16.200139 | orchestrator | Thursday 01 January 2026 00:47:04 +0000 (0:00:01.828) 0:01:20.144 ******
2026-01-01 00:47:16.200155 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:47:16.200167 | orchestrator | ok: [testbed-manager]
2026-01-01 00:47:16.200177 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:47:16.200188 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:47:16.200198 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:47:16.200209 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:47:16.200219 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:47:16.200230 | orchestrator |
2026-01-01 00:47:16.200240 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-01 00:47:16.200251 | orchestrator | Thursday 01 January 2026 00:47:06 +0000 (0:00:01.316) 0:01:21.995 ******
2026-01-01 00:47:16.200269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-01 00:47:16.200282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:47:16.200293 | orchestrator |
2026-01-01 00:47:16.200304 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-01 00:47:16.200315 | orchestrator | Thursday 01 January 2026 00:47:07 +0000 (0:00:01.316) 0:01:23.312 ******
2026-01-01 00:47:16.200325 | orchestrator | changed: [testbed-manager]
2026-01-01 00:47:16.200336 | orchestrator |
2026-01-01 00:47:16.200346 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-01 00:47:16.200357 | orchestrator | Thursday 01 January 2026 00:47:10 +0000 (0:00:02.120) 0:01:25.432 ******
2026-01-01 00:47:16.200368 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:47:16.200378 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:47:16.200389 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:47:16.200399 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:47:16.200409 | orchestrator | changed: [testbed-manager]
2026-01-01 00:47:16.200420 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:47:16.200430 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:47:16.200441 | orchestrator |
2026-01-01 00:47:16.200451 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:47:16.200462 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:47:16.200482 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:47:16.200493 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:47:16.200504 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:47:16.200515 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:47:16.200526 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:47:16.200536 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:47:16.200547 | orchestrator |
2026-01-01 00:47:16.200558 | orchestrator |
2026-01-01 00:47:16.200569 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:47:16.200580 | orchestrator | Thursday 01 January 2026 00:47:13 +0000 (0:00:03.272) 0:01:28.705 ******
2026-01-01 00:47:16.200590 | orchestrator | ===============================================================================
2026-01-01 00:47:16.200601 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.96s
2026-01-01 00:47:16.200611 | orchestrator | osism.services.netdata : Add repository -------------------------------- 14.30s
2026-01-01 00:47:16.200622 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.37s
2026-01-01 00:47:16.200633 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.56s
2026-01-01 00:47:16.200643 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.27s
2026-01-01 00:47:16.200654 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.88s
2026-01-01 00:47:16.200664 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.73s
2026-01-01 00:47:16.200674 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.12s
2026-01-01 00:47:16.200685 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.85s
2026-01-01 00:47:16.200695 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.83s
2026-01-01 00:47:16.200706 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.59s
2026-01-01 00:47:16.200723 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.55s
2026-01-01 00:47:16.200734 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.34s
2026-01-01 00:47:16.200769 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.32s
2026-01-01 00:47:16.200783 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.29s
2026-01-01 00:47:16.200794 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.21s
2026-01-01 00:47:16.200805 | orchestrator | 2026-01-01 00:47:16 | INFO  | Task bcdfe1f8-a7d7-4aea-9a4f-a0dc6829ea9c is in state STARTED
2026-01-01 00:47:16.200816 | orchestrator | 2026-01-01 00:47:16 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:16.200827 | orchestrator | 2026-01-01 00:47:16 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:16.200837 | orchestrator | 2026-01-01 00:47:16 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:19.242192 | orchestrator | 2026-01-01 00:47:19 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:19.243844 | orchestrator | 2026-01-01 00:47:19 | INFO  | Task bcdfe1f8-a7d7-4aea-9a4f-a0dc6829ea9c is in state STARTED
2026-01-01 00:47:19.246162 | orchestrator | 2026-01-01 00:47:19 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:19.247899 | orchestrator | 2026-01-01 00:47:19 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:19.247934 | orchestrator | 2026-01-01 00:47:19 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:22.322235 | orchestrator | 2026-01-01 00:47:22 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:22.322382 | orchestrator | 2026-01-01 00:47:22 | INFO  | Task bcdfe1f8-a7d7-4aea-9a4f-a0dc6829ea9c is in state SUCCESS
2026-01-01 00:47:22.324090 | orchestrator | 2026-01-01 00:47:22 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:22.326354 | orchestrator | 2026-01-01 00:47:22 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:22.326414 | orchestrator | 2026-01-01 00:47:22 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:25.383675 | orchestrator | 2026-01-01 00:47:25 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:25.386446 | orchestrator | 2026-01-01 00:47:25 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:25.388117 | orchestrator | 2026-01-01 00:47:25 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:25.388348 | orchestrator | 2026-01-01 00:47:25 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:28.431524 | orchestrator | 2026-01-01 00:47:28 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:28.432370 | orchestrator | 2026-01-01 00:47:28 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:28.433650 | orchestrator | 2026-01-01 00:47:28 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:28.433692 | orchestrator | 2026-01-01 00:47:28 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:31.523487 | orchestrator | 2026-01-01 00:47:31 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:31.527273 | orchestrator | 2026-01-01 00:47:31 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:31.527990 | orchestrator | 2026-01-01 00:47:31 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:31.528055 | orchestrator | 2026-01-01 00:47:31 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:34.606176 | orchestrator | 2026-01-01 00:47:34 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:34.614431 | orchestrator | 2026-01-01 00:47:34 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:34.617541 | orchestrator | 2026-01-01 00:47:34 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:34.617617 | orchestrator | 2026-01-01 00:47:34 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:37.695820 | orchestrator | 2026-01-01 00:47:37 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:37.696603 | orchestrator | 2026-01-01 00:47:37 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:37.698177 | orchestrator | 2026-01-01 00:47:37 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:37.698231 | orchestrator | 2026-01-01 00:47:37 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:40.749484 | orchestrator | 2026-01-01 00:47:40 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:40.750536 | orchestrator | 2026-01-01 00:47:40 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:40.751674 | orchestrator | 2026-01-01 00:47:40 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:40.751706 | orchestrator | 2026-01-01 00:47:40 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:43.790967 | orchestrator | 2026-01-01 00:47:43 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:43.791891 | orchestrator | 2026-01-01 00:47:43 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:43.793187 | orchestrator | 2026-01-01 00:47:43 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:43.793417 | orchestrator | 2026-01-01 00:47:43 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:46.835740 | orchestrator | 2026-01-01 00:47:46 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:46.836658 | orchestrator | 2026-01-01 00:47:46 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:46.838375 | orchestrator | 2026-01-01 00:47:46 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:46.838423 | orchestrator | 2026-01-01 00:47:46 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:49.885567 | orchestrator | 2026-01-01 00:47:49 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:49.888575 | orchestrator | 2026-01-01 00:47:49 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:49.890836 | orchestrator | 2026-01-01 00:47:49 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:49.891148 | orchestrator | 2026-01-01 00:47:49 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:47:52.937051 | orchestrator | 2026-01-01 00:47:52 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:47:52.939528 | orchestrator | 2026-01-01 00:47:52 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED
2026-01-01 00:47:52.941816 | orchestrator | 2026-01-01 00:47:52 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED
2026-01-01 00:47:52.941868 | orchestrator | 2026-01-01 00:47:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:47:55.996201 | orchestrator | 2026-01-01 00:47:55 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:47:55.998191 | orchestrator | 2026-01-01 00:47:55 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:47:56.000787 | orchestrator | 2026-01-01 00:47:56 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED 2026-01-01 00:47:56.000828 | orchestrator | 2026-01-01 00:47:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:47:59.052868 | orchestrator | 2026-01-01 00:47:59 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:47:59.054598 | orchestrator | 2026-01-01 00:47:59 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:47:59.056084 | orchestrator | 2026-01-01 00:47:59 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED 2026-01-01 00:47:59.056856 | orchestrator | 2026-01-01 00:47:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:48:02.098603 | orchestrator | 2026-01-01 00:48:02 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:48:02.101236 | orchestrator | 2026-01-01 00:48:02 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:48:02.104195 | orchestrator | 2026-01-01 00:48:02 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED 2026-01-01 00:48:02.104316 | orchestrator | 2026-01-01 00:48:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:48:05.153454 | orchestrator | 2026-01-01 00:48:05 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:48:05.154679 | orchestrator | 2026-01-01 00:48:05 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:48:05.156978 | orchestrator | 2026-01-01 
00:48:05 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED 2026-01-01 00:48:05.157038 | orchestrator | 2026-01-01 00:48:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:48:08.204041 | orchestrator | 2026-01-01 00:48:08 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:48:08.206427 | orchestrator | 2026-01-01 00:48:08 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:48:08.208791 | orchestrator | 2026-01-01 00:48:08 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state STARTED 2026-01-01 00:48:08.209003 | orchestrator | 2026-01-01 00:48:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:48:11.258131 | orchestrator | 2026-01-01 00:48:11 | INFO  | Task fff6579d-f951-4e93-be07-b80752540cdd is in state STARTED 2026-01-01 00:48:11.258804 | orchestrator | 2026-01-01 00:48:11 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:48:11.262978 | orchestrator | 2026-01-01 00:48:11 | INFO  | Task f7f0c5ee-02c7-4978-9549-f08b09ae8d04 is in state STARTED 2026-01-01 00:48:11.264472 | orchestrator | 2026-01-01 00:48:11 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:48:11.265720 | orchestrator | 2026-01-01 00:48:11 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:48:11.266907 | orchestrator | 2026-01-01 00:48:11 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:48:11.284743 | orchestrator | 2026-01-01 00:48:11.284834 | orchestrator | 2026-01-01 00:48:11.284847 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-01-01 00:48:11.284858 | orchestrator | 2026-01-01 00:48:11.284868 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-01-01 00:48:11.284878 | orchestrator | Thursday 01 January 2026 00:46:07 +0000 
(0:00:00.246) 0:00:00.246 ******
2026-01-01 00:48:11.284888 | orchestrator | ok: [testbed-manager]
2026-01-01 00:48:11.284899 | orchestrator | 
2026-01-01 00:48:11.284909 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-01-01 00:48:11.284919 | orchestrator | Thursday 01 January 2026 00:46:08 +0000 (0:00:00.765) 0:00:01.012 ******
2026-01-01 00:48:11.284929 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-01-01 00:48:11.284939 | orchestrator | 
2026-01-01 00:48:11.284948 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-01-01 00:48:11.284958 | orchestrator | Thursday 01 January 2026 00:46:09 +0000 (0:00:01.252) 0:00:02.265 ******
2026-01-01 00:48:11.284968 | orchestrator | changed: [testbed-manager]
2026-01-01 00:48:11.284978 | orchestrator | 
2026-01-01 00:48:11.284987 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-01-01 00:48:11.284997 | orchestrator | Thursday 01 January 2026 00:46:10 +0000 (0:00:01.560) 0:00:03.825 ******
2026-01-01 00:48:11.285007 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-01-01 00:48:11.285039 | orchestrator | ok: [testbed-manager]
2026-01-01 00:48:11.285049 | orchestrator | 
2026-01-01 00:48:11.285059 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-01-01 00:48:11.285068 | orchestrator | Thursday 01 January 2026 00:47:09 +0000 (0:00:58.827) 0:01:02.653 ******
2026-01-01 00:48:11.285078 | orchestrator | changed: [testbed-manager]
2026-01-01 00:48:11.285090 | orchestrator | 
2026-01-01 00:48:11.285140 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:48:11.285157 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:48:11.285176 | orchestrator | 
2026-01-01 00:48:11.285194 | orchestrator | 
2026-01-01 00:48:11.285206 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:48:11.285216 | orchestrator | Thursday 01 January 2026 00:47:21 +0000 (0:00:11.445) 0:01:14.098 ******
2026-01-01 00:48:11.285226 | orchestrator | ===============================================================================
2026-01-01 00:48:11.285235 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 58.83s
2026-01-01 00:48:11.285245 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 11.45s
2026-01-01 00:48:11.285254 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.56s
2026-01-01 00:48:11.285268 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.25s
2026-01-01 00:48:11.285283 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.77s
2026-01-01 00:48:11.285295 | orchestrator | 
2026-01-01 00:48:11.285306 | orchestrator | 
2026-01-01 00:48:11.285317 | orchestrator | PLAY [Apply role common] 
*******************************************************
2026-01-01 00:48:11.285327 | orchestrator | 
2026-01-01 00:48:11.285339 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-01 00:48:11.285350 | orchestrator | Thursday 01 January 2026 00:45:34 +0000 (0:00:00.306) 0:00:00.306 ******
2026-01-01 00:48:11.285361 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:48:11.285373 | orchestrator | 
2026-01-01 00:48:11.285384 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-01-01 00:48:11.285395 | orchestrator | Thursday 01 January 2026 00:45:36 +0000 (0:00:01.844) 0:00:02.150 ******
2026-01-01 00:48:11.285407 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:48:11.285418 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:48:11.285429 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:48:11.285438 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:48:11.285448 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:48:11.285457 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:48:11.285468 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:48:11.285477 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:48:11.285486 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:48:11.285496 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:48:11.285505 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:48:11.285522 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:48:11.285532 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:48:11.285542 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:48:11.285560 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-01-01 00:48:11.285570 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:48:11.285595 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:48:11.285606 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-01-01 00:48:11.285615 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:48:11.285624 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:48:11.285635 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-01-01 00:48:11.285648 | orchestrator | 
2026-01-01 00:48:11.285663 | orchestrator | TASK [common : include_tasks] **************************************************
2026-01-01 00:48:11.285673 | orchestrator | Thursday 01 January 2026 00:45:41 +0000 (0:00:05.091) 0:00:07.242 ******
2026-01-01 00:48:11.285682 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:48:11.285693 | orchestrator | 
2026-01-01
00:48:11.285702 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-01-01 00:48:11.285712 | orchestrator | Thursday 01 January 2026 00:45:42 +0000 (0:00:01.225) 0:00:08.467 ******
2026-01-01 00:48:11.285727 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:48:11.285741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:48:11.285778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:48:11.285789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:48:11.285799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:48:11.285824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:48:11.285841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-01-01 00:48:11.285852 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.285863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.285876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.285892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.285908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.285935 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.285969 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.285986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.285999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.286009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.286065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.286078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.286088 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.286105 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.286115 | orchestrator | 
2026-01-01 00:48:11.286129 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-01-01 00:48:11.286139 | orchestrator | Thursday 01 January 2026 00:45:46 +0000 (0:00:04.003) 0:00:12.471 ******
2026-01-01 00:48:11.286163 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286174 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286185 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286233 | orchestrator | skipping: [testbed-manager]
2026-01-01 00:48:11.286243 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:48:11.286253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286332 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:48:11.286342 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:48:11.286352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 00:48:11.286388 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:48:11.286399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-01-01 00:48:11.287846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.287897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.287913 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:11.287939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:48:11.287951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.287976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.287993 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:48:11.288003 | orchestrator | 2026-01-01 00:48:11.288014 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-01 00:48:11.288024 | orchestrator | Thursday 01 January 2026 00:45:47 +0000 (0:00:01.631) 0:00:14.103 ****** 2026-01-01 00:48:11.288037 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:48:11.288059 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288085 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288100 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:48:11.288117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:48:11.288129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-01 00:48:11.288160 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:48:11.288170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:48:11.288181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:48:11.288216 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:48:11.288234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:48:11.288272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288301 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:48:11.288311 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:48:11.288319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:48:11.288331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288353 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:48:11.288361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-01 00:48:11.288370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.288391 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:11.288412 | orchestrator | 2026-01-01 00:48:11.288421 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-01 00:48:11.288429 | orchestrator | Thursday 01 January 2026 00:45:50 +0000 (0:00:02.847) 0:00:16.950 ****** 2026-01-01 00:48:11.288437 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:48:11.288445 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:48:11.288452 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:48:11.288460 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:48:11.288468 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:48:11.288476 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:48:11.288484 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:11.288492 | orchestrator | 2026-01-01 00:48:11.288508 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-01 00:48:11.288517 | orchestrator | Thursday 01 January 2026 00:45:51 +0000 (0:00:01.000) 0:00:17.951 ****** 2026-01-01 00:48:11.288525 | orchestrator | skipping: [testbed-manager] 2026-01-01 00:48:11.288533 | orchestrator | skipping: [testbed-node-0] 
2026-01-01 00:48:11.288540 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:48:11.288548 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:48:11.288556 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:48:11.288563 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:48:11.288571 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:48:11.288579 | orchestrator | 2026-01-01 00:48:11.288587 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-01 00:48:11.288595 | orchestrator | Thursday 01 January 2026 00:45:53 +0000 (0:00:01.510) 0:00:19.461 ****** 2026-01-01 00:48:11.288609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.288627 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.288649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.288665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.288693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288701 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.288718 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.288730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.288788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-01 00:48:11.288796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288804 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288813 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.288871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.288879 | orchestrator |
2026-01-01 00:48:11.288887 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-01-01 00:48:11.288895 | orchestrator | Thursday 01 January 2026 00:46:01 +0000 (0:00:08.255) 0:00:27.717 ******
2026-01-01 00:48:11.288904 | orchestrator | [WARNING]: Skipped
2026-01-01 00:48:11.288912 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-01-01 00:48:11.288920 | orchestrator | to this access issue:
2026-01-01 00:48:11.288928 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-01-01 00:48:11.288936 | orchestrator | directory
2026-01-01 00:48:11.288943 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-01 00:48:11.288951 | orchestrator |
2026-01-01 00:48:11.288959 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-01-01 00:48:11.288967 | orchestrator | Thursday 01 January 2026 00:46:04 +0000 (0:00:02.654) 0:00:30.371 ******
2026-01-01 00:48:11.288974 | orchestrator | [WARNING]: Skipped
2026-01-01 00:48:11.288982 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-01-01 00:48:11.288990 | orchestrator | to this access issue:
2026-01-01 00:48:11.288997 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-01-01 00:48:11.289005 | orchestrator | directory
2026-01-01 00:48:11.289013 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-01 00:48:11.289020 | orchestrator |
2026-01-01 00:48:11.289028 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-01-01 00:48:11.289036 | orchestrator | Thursday 01 January 2026 00:46:05 +0000 (0:00:00.972) 0:00:31.344 ******
2026-01-01 00:48:11.289044 | orchestrator | [WARNING]: Skipped
2026-01-01 00:48:11.289051 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-01-01 00:48:11.289059 | orchestrator | to this access issue:
2026-01-01 00:48:11.289067 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-01-01 00:48:11.289074 | orchestrator | directory
2026-01-01 00:48:11.289082 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-01 00:48:11.289090 | orchestrator |
2026-01-01 00:48:11.289097 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-01-01 00:48:11.289105 | orchestrator | Thursday 01 January 2026 00:46:06 +0000 (0:00:00.928) 0:00:32.273 ******
2026-01-01 00:48:11.289113 | orchestrator | [WARNING]: Skipped
2026-01-01 00:48:11.289120 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-01-01 00:48:11.289133 | orchestrator | to this access issue:
2026-01-01 00:48:11.289141 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-01-01 00:48:11.289149 | orchestrator | directory
2026-01-01 00:48:11.289157 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-01 00:48:11.289165 | orchestrator |
2026-01-01 00:48:11.289173 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-01-01 00:48:11.289180 | orchestrator | Thursday 01 January 2026 00:46:07 +0000 (0:00:00.907) 0:00:33.180 ******
2026-01-01 00:48:11.289188 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:48:11.289196 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:48:11.289204 | orchestrator | changed: [testbed-manager]
2026-01-01 00:48:11.289211 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:48:11.289219 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:48:11.289227 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:48:11.289234 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:48:11.289246 | orchestrator |
2026-01-01 00:48:11.289265 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-01-01 00:48:11.289280 | orchestrator | Thursday 01 January 2026 00:46:10 +0000 (0:00:03.558) 0:00:36.738 ******
2026-01-01 00:48:11.289289 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-01 00:48:11.289297 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-01 00:48:11.289305 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-01 00:48:11.289317 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-01 00:48:11.289325 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-01 00:48:11.289333 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-01 00:48:11.289341 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-01-01 00:48:11.289351 | orchestrator |
2026-01-01 00:48:11.289364 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-01-01 00:48:11.289376 | orchestrator | Thursday 01 January 2026 00:46:14 +0000 (0:00:03.670) 0:00:40.409 ******
2026-01-01 00:48:11.289389 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:48:11.289401 | orchestrator | changed: [testbed-manager]
2026-01-01 00:48:11.289413 | orchestrator | changed: [testbed-node-1]
2026-01-01
00:48:11.289424 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:48:11.289435 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:48:11.289447 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:48:11.289458 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:48:11.289469 | orchestrator | 2026-01-01 00:48:11.289480 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-01 00:48:11.289491 | orchestrator | Thursday 01 January 2026 00:46:17 +0000 (0:00:03.558) 0:00:43.968 ****** 2026-01-01 00:48:11.289503 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.289517 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.289539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.289552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.289564 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.289588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.289600 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.289623 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.289637 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.289663 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.289677 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.289689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.289702 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.289716 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.289725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.289733 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.289744 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.289786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:48:11.289795 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.289804 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.289817 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.289830 | orchestrator |
2026-01-01 00:48:11.289844 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-01-01 00:48:11.289853 | orchestrator | Thursday 01 January 2026 00:46:19 +0000 (0:00:02.074) 0:00:46.043 ******
2026-01-01 00:48:11.289861 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-01 00:48:11.289869 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-01 00:48:11.289877 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-01 00:48:11.289889 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-01 00:48:11.289898 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-01 00:48:11.289906 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-01 00:48:11.289914 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-01-01 00:48:11.289922 | orchestrator |
2026-01-01 00:48:11.289930 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-01-01 00:48:11.289938 | orchestrator | Thursday 01 January 2026 00:46:25 +0000 (0:00:05.400) 0:00:51.444 ******
2026-01-01 00:48:11.289946 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-01 00:48:11.289959 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-01 00:48:11.289969 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-01-01 00:48:11.289982 |
orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-01 00:48:11.289990 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-01 00:48:11.289998 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-01 00:48:11.290005 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-01 00:48:11.290013 | orchestrator | 2026-01-01 00:48:11.290053 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-01 00:48:11.290062 | orchestrator | Thursday 01 January 2026 00:46:29 +0000 (0:00:04.429) 0:00:55.873 ****** 2026-01-01 00:48:11.290070 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.290079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.290087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.290096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.290126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.290143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290160 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.290177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-01 00:48:11.290194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290235 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290244 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290261 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290277 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290289 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:48:11.290306 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:48:11.290319 | orchestrator |
2026-01-01 00:48:11.290344 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-01-01 00:48:11.290358 | orchestrator | Thursday 01 January 2026 00:46:33 +0000 (0:00:03.741) 0:00:59.615 ******
2026-01-01 00:48:11.290369 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:48:11.290381 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:48:11.290393 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:48:11.290406 | orchestrator | changed: [testbed-manager]
2026-01-01 00:48:11.290419 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:48:11.290431 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:48:11.290444 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:48:11.290459 | orchestrator |
2026-01-01 00:48:11.290472 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-01-01 00:48:11.290485 | orchestrator | Thursday 01 January 2026 00:46:35 +0000 (0:00:02.120) 0:01:01.735 ******
2026-01-01 00:48:11.290499 | orchestrator | changed: [testbed-manager]
2026-01-01 00:48:11.290512 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:48:11.290525 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:48:11.290534 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:48:11.290542 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:48:11.290550 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:48:11.290560 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:48:11.290572 | orchestrator |
2026-01-01 00:48:11.290586 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:48:11.290599 | orchestrator | Thursday 01 January 2026 00:46:36 +0000 (0:00:01.335) 0:01:03.070 ******
2026-01-01 00:48:11.290612 | orchestrator |
2026-01-01 00:48:11.290624 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:48:11.290631 | orchestrator | Thursday 01 January 2026 00:46:37 +0000 (0:00:00.132) 0:01:03.203 ******
2026-01-01 00:48:11.290639 | orchestrator |
2026-01-01 00:48:11.290647 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:48:11.290655 | orchestrator | Thursday 01 January 2026 00:46:37 +0000 (0:00:00.067) 0:01:03.271 ******
2026-01-01 00:48:11.290663 | orchestrator |
2026-01-01 00:48:11.290671 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:48:11.290679 | orchestrator | Thursday 01 January 2026 00:46:37 +0000 (0:00:00.242) 0:01:03.513 ******
2026-01-01 00:48:11.290686 | orchestrator |
2026-01-01 00:48:11.290694 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:48:11.290702 | orchestrator | Thursday 01 January 2026 00:46:37 +0000 (0:00:00.066) 0:01:03.580 ******
2026-01-01 00:48:11.290710 | orchestrator |
2026-01-01 00:48:11.290718 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:48:11.290725 | orchestrator | Thursday 01 January 2026 00:46:37 +0000 (0:00:00.078) 0:01:03.658 ******
2026-01-01 00:48:11.290733 | orchestrator |
2026-01-01 00:48:11.290741 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-01 00:48:11.290802 | orchestrator | Thursday 01 January 2026 00:46:37 +0000 (0:00:00.072) 0:01:03.730 ******
2026-01-01 00:48:11.290811 | orchestrator |
2026-01-01 00:48:11.290819 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-01-01 00:48:11.290830 | orchestrator | Thursday 01 January 2026 00:46:37 +0000 (0:00:00.103) 0:01:03.834 ******
2026-01-01 00:48:11.290844 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:48:11.290857 | orchestrator | changed: [testbed-manager]
2026-01-01 00:48:11.290870 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:48:11.290880 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:48:11.290891 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:48:11.290899 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:48:11.290905 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:48:11.290912 | orchestrator |
2026-01-01 00:48:11.290918 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-01-01 00:48:11.290925 | orchestrator | Thursday 01 January 2026 00:47:12 +0000 (0:00:35.006) 0:01:38.840 ******
2026-01-01 00:48:11.290938 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:48:11.290945 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:48:11.290951 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:48:11.290958 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:48:11.290964 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:48:11.290971 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:48:11.290977 | orchestrator | changed: [testbed-manager]
2026-01-01 00:48:11.290983 | orchestrator |
2026-01-01 00:48:11.290992 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-01 00:48:11.291003 | orchestrator | Thursday 01 January 2026 00:47:56 +0000 (0:00:43.626) 0:02:22.466 ******
2026-01-01 00:48:11.291014 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:48:11.291025 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:48:11.291035 | orchestrator | ok: [testbed-manager]
2026-01-01 00:48:11.291045 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:48:11.291052 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:48:11.291059 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:48:11.291065 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:48:11.291072 | orchestrator |
2026-01-01 00:48:11.291078 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-01 00:48:11.291085 | orchestrator | Thursday 01 January 2026 00:47:58 +0000 (0:00:02.540) 0:02:25.007 ******
2026-01-01 00:48:11.291091 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:48:11.291098 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:48:11.291104 | orchestrator | changed: [testbed-manager]
2026-01-01 00:48:11.291111 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:48:11.291117 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:48:11.291124 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:48:11.291130 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:48:11.291137 | orchestrator |
2026-01-01 00:48:11.291148 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:48:11.291156 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-01 00:48:11.291162 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-01 00:48:11.291175 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-01 00:48:11.291182 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-01 00:48:11.291189 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-01 00:48:11.291196 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0
skipped=4  rescued=0 ignored=0 2026-01-01 00:48:11.291202 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-01 00:48:11.291209 | orchestrator | 2026-01-01 00:48:11.291215 | orchestrator | 2026-01-01 00:48:11.291222 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:48:11.291229 | orchestrator | Thursday 01 January 2026 00:48:08 +0000 (0:00:09.924) 0:02:34.932 ****** 2026-01-01 00:48:11.291235 | orchestrator | =============================================================================== 2026-01-01 00:48:11.291242 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 43.63s 2026-01-01 00:48:11.291248 | orchestrator | common : Restart fluentd container ------------------------------------- 35.01s 2026-01-01 00:48:11.291255 | orchestrator | common : Restart cron container ----------------------------------------- 9.92s 2026-01-01 00:48:11.291261 | orchestrator | common : Copying over config.json files for services -------------------- 8.26s 2026-01-01 00:48:11.291272 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 5.40s 2026-01-01 00:48:11.291279 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.09s 2026-01-01 00:48:11.291285 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 4.43s 2026-01-01 00:48:11.291292 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.00s 2026-01-01 00:48:11.291298 | orchestrator | common : Check common containers ---------------------------------------- 3.74s 2026-01-01 00:48:11.291305 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.67s 2026-01-01 00:48:11.291311 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.56s 2026-01-01 
00:48:11.291318 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.56s 2026-01-01 00:48:11.291328 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.85s 2026-01-01 00:48:11.291340 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.65s 2026-01-01 00:48:11.291351 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.54s 2026-01-01 00:48:11.291363 | orchestrator | common : Creating log volume -------------------------------------------- 2.12s 2026-01-01 00:48:11.291372 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.07s 2026-01-01 00:48:11.291379 | orchestrator | common : include_tasks -------------------------------------------------- 1.84s 2026-01-01 00:48:11.291385 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.63s 2026-01-01 00:48:11.291392 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.51s 2026-01-01 00:48:11.291399 | orchestrator | 2026-01-01 00:48:11 | INFO  | Task 15443760-2158-4506-8ab2-e43079ec9833 is in state SUCCESS 2026-01-01 00:48:11.291405 | orchestrator | 2026-01-01 00:48:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:48:14.329817 | orchestrator | 2026-01-01 00:48:14 | INFO  | Task fff6579d-f951-4e93-be07-b80752540cdd is in state STARTED 2026-01-01 00:48:14.330529 | orchestrator | 2026-01-01 00:48:14 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:48:14.333585 | orchestrator | 2026-01-01 00:48:14 | INFO  | Task f7f0c5ee-02c7-4978-9549-f08b09ae8d04 is in state STARTED 2026-01-01 00:48:14.334118 | orchestrator | 2026-01-01 00:48:14 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:48:14.336846 | orchestrator | 2026-01-01 00:48:14 | INFO  | Task 
75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:48:14.337672 | orchestrator | 2026-01-01 00:48:14 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:48:14.337724 | orchestrator | 2026-01-01 00:48:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:48:38.844048 | orchestrator | 2026-01-01 00:48:38 | INFO  | Task fff6579d-f951-4e93-be07-b80752540cdd is in state STARTED 2026-01-01 00:48:38.850071 | orchestrator | 2026-01-01 00:48:38 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:48:38.850485 | orchestrator | 2026-01-01 00:48:38 | INFO  | Task f7f0c5ee-02c7-4978-9549-f08b09ae8d04 is in state SUCCESS 2026-01-01 00:48:38.891036 | orchestrator | 2026-01-01 00:48:38 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:48:38.893559 | orchestrator | 2026-01-01 00:48:38 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:48:38.893644 | orchestrator | 2026-01-01 00:48:38 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:48:38.894565 | orchestrator | 2026-01-01 00:48:38 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:48:38.894598 | orchestrator | 2026-01-01 00:48:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:48:41.952737 | orchestrator | 2026-01-01 00:48:41 | INFO  | Task 
fff6579d-f951-4e93-be07-b80752540cdd is in state STARTED 2026-01-01 00:48:41.952966 | orchestrator | 2026-01-01 00:48:41 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:48:41.953663 | orchestrator | 2026-01-01 00:48:41 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:48:41.954920 | orchestrator | 2026-01-01 00:48:41 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:48:41.955415 | orchestrator | 2026-01-01 00:48:41 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:48:41.957073 | orchestrator | 2026-01-01 00:48:41 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:48:41.957134 | orchestrator | 2026-01-01 00:48:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:48:51.139151 | orchestrator | 2026-01-01 00:48:51 | INFO  | Task fff6579d-f951-4e93-be07-b80752540cdd is in state SUCCESS 2026-01-01 00:48:51.140060 | orchestrator | 2026-01-01 00:48:51.140095 | orchestrator | 2026-01-01 00:48:51.140103 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:48:51.140112 | orchestrator | 2026-01-01 00:48:51.140137 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 00:48:51.140146 | orchestrator | Thursday 01 January 2026 00:48:19 +0000 (0:00:00.686) 0:00:00.686 ****** 2026-01-01 00:48:51.140153 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:48:51.140162 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:48:51.140170 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:48:51.140177 | orchestrator | 2026-01-01 00:48:51.140185 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 00:48:51.140192 | orchestrator | Thursday 01 January 2026 00:48:20 +0000 (0:00:00.793) 0:00:01.479 ****** 2026-01-01 00:48:51.140223 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-01 00:48:51.140232 | orchestrator | ok: [testbed-node-1] => 
(item=enable_memcached_True) 2026-01-01 00:48:51.140240 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-01 00:48:51.140248 | orchestrator | 2026-01-01 00:48:51.140256 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-01 00:48:51.140263 | orchestrator | 2026-01-01 00:48:51.140270 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-01 00:48:51.140277 | orchestrator | Thursday 01 January 2026 00:48:21 +0000 (0:00:01.227) 0:00:02.714 ****** 2026-01-01 00:48:51.140285 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:48:51.140294 | orchestrator | 2026-01-01 00:48:51.140301 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-01-01 00:48:51.140330 | orchestrator | Thursday 01 January 2026 00:48:22 +0000 (0:00:01.566) 0:00:04.280 ****** 2026-01-01 00:48:51.140338 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-01 00:48:51.140346 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-01 00:48:51.140353 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-01 00:48:51.140360 | orchestrator | 2026-01-01 00:48:51.140368 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-01 00:48:51.140376 | orchestrator | Thursday 01 January 2026 00:48:23 +0000 (0:00:01.170) 0:00:05.451 ****** 2026-01-01 00:48:51.140384 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-01 00:48:51.140391 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-01 00:48:51.140399 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-01 00:48:51.140407 | orchestrator | 2026-01-01 00:48:51.140415 | orchestrator | TASK [memcached : Check memcached container] 
*********************************** 2026-01-01 00:48:51.140422 | orchestrator | Thursday 01 January 2026 00:48:27 +0000 (0:00:03.006) 0:00:08.457 ****** 2026-01-01 00:48:51.140430 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:48:51.140438 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:48:51.140445 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:48:51.140452 | orchestrator | 2026-01-01 00:48:51.140460 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-01-01 00:48:51.140467 | orchestrator | Thursday 01 January 2026 00:48:29 +0000 (0:00:02.389) 0:00:10.847 ****** 2026-01-01 00:48:51.140475 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:48:51.140482 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:48:51.140490 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:48:51.140497 | orchestrator | 2026-01-01 00:48:51.140505 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:48:51.140513 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:48:51.140522 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:48:51.140529 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:48:51.140537 | orchestrator | 2026-01-01 00:48:51.140544 | orchestrator | 2026-01-01 00:48:51.140552 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:48:51.140560 | orchestrator | Thursday 01 January 2026 00:48:36 +0000 (0:00:07.367) 0:00:18.215 ****** 2026-01-01 00:48:51.140567 | orchestrator | =============================================================================== 2026-01-01 00:48:51.140575 | orchestrator | memcached : Restart memcached container --------------------------------- 7.37s 
2026-01-01 00:48:51.140587 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.01s 2026-01-01 00:48:51.140594 | orchestrator | memcached : Check memcached container ----------------------------------- 2.39s 2026-01-01 00:48:51.140602 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.57s 2026-01-01 00:48:51.140609 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.23s 2026-01-01 00:48:51.140617 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.17s 2026-01-01 00:48:51.140624 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.79s 2026-01-01 00:48:51.140632 | orchestrator | 2026-01-01 00:48:51.140640 | orchestrator | 2026-01-01 00:48:51.140647 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:48:51.140655 | orchestrator | 2026-01-01 00:48:51.140662 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 00:48:51.140671 | orchestrator | Thursday 01 January 2026 00:48:18 +0000 (0:00:00.702) 0:00:00.702 ****** 2026-01-01 00:48:51.140678 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:48:51.140691 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:48:51.140698 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:48:51.140707 | orchestrator | 2026-01-01 00:48:51.140715 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 00:48:51.140734 | orchestrator | Thursday 01 January 2026 00:48:19 +0000 (0:00:00.559) 0:00:01.261 ****** 2026-01-01 00:48:51.140743 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-01 00:48:51.140779 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-01 00:48:51.140786 | orchestrator | ok: [testbed-node-2] => 
(item=enable_redis_True) 2026-01-01 00:48:51.140794 | orchestrator | 2026-01-01 00:48:51.140801 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-01-01 00:48:51.140807 | orchestrator | 2026-01-01 00:48:51.140815 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-01 00:48:51.140821 | orchestrator | Thursday 01 January 2026 00:48:20 +0000 (0:00:00.929) 0:00:02.191 ****** 2026-01-01 00:48:51.140828 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:48:51.140835 | orchestrator | 2026-01-01 00:48:51.140842 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-01 00:48:51.140850 | orchestrator | Thursday 01 January 2026 00:48:21 +0000 (0:00:01.772) 0:00:03.963 ****** 2026-01-01 00:48:51.140858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140924 | orchestrator | 2026-01-01 00:48:51.140931 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-01 00:48:51.140938 | orchestrator | Thursday 01 January 2026 00:48:23 +0000 (0:00:01.668) 0:00:05.632 ****** 2026-01-01 00:48:51.140945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.140995 | orchestrator | 2026-01-01 00:48:51.141001 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-01 00:48:51.141008 | orchestrator | Thursday 01 January 2026 00:48:28 +0000 (0:00:04.418) 0:00:10.051 ****** 2026-01-01 00:48:51.141014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141021 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141062 | orchestrator | 2026-01-01 00:48:51.141072 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-01-01 00:48:51.141078 | orchestrator | Thursday 01 January 2026 00:48:31 +0000 (0:00:03.225) 0:00:13.276 ****** 2026-01-01 00:48:51.141085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-01 00:48:51.141157 | orchestrator | 2026-01-01 00:48:51.141163 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-01 00:48:51.141169 | orchestrator | Thursday 01 January 2026 00:48:33 +0000 (0:00:01.904) 0:00:15.181 ****** 2026-01-01 00:48:51.141174 | orchestrator | 2026-01-01 00:48:51.141178 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-01 00:48:51.141186 | 
orchestrator | Thursday 01 January 2026 00:48:33 +0000 (0:00:00.121) 0:00:15.302 ****** 2026-01-01 00:48:51.141191 | orchestrator | 2026-01-01 00:48:51.141195 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-01 00:48:51.141199 | orchestrator | Thursday 01 January 2026 00:48:33 +0000 (0:00:00.128) 0:00:15.430 ****** 2026-01-01 00:48:51.141203 | orchestrator | 2026-01-01 00:48:51.141207 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-01 00:48:51.141211 | orchestrator | Thursday 01 January 2026 00:48:33 +0000 (0:00:00.133) 0:00:15.564 ****** 2026-01-01 00:48:51.141215 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:48:51.141219 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:48:51.141223 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:48:51.141227 | orchestrator | 2026-01-01 00:48:51.141231 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-01 00:48:51.141235 | orchestrator | Thursday 01 January 2026 00:48:42 +0000 (0:00:08.720) 0:00:24.284 ****** 2026-01-01 00:48:51.141239 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:48:51.141244 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:48:51.141250 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:48:51.141257 | orchestrator | 2026-01-01 00:48:51.141263 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:48:51.141269 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:48:51.141274 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:48:51.141278 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:48:51.141282 | orchestrator | 2026-01-01 00:48:51.141287 
| orchestrator | 2026-01-01 00:48:51.141291 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:48:51.141295 | orchestrator | Thursday 01 January 2026 00:48:49 +0000 (0:00:07.639) 0:00:31.924 ****** 2026-01-01 00:48:51.141299 | orchestrator | =============================================================================== 2026-01-01 00:48:51.141307 | orchestrator | redis : Restart redis container ----------------------------------------- 8.72s 2026-01-01 00:48:51.141311 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.64s 2026-01-01 00:48:51.141315 | orchestrator | redis : Copying over default config.json files -------------------------- 4.42s 2026-01-01 00:48:51.141319 | orchestrator | redis : Copying over redis config files --------------------------------- 3.23s 2026-01-01 00:48:51.141323 | orchestrator | redis : Check redis containers ------------------------------------------ 1.90s 2026-01-01 00:48:51.141327 | orchestrator | redis : include_tasks --------------------------------------------------- 1.77s 2026-01-01 00:48:51.141331 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.67s 2026-01-01 00:48:51.141335 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s 2026-01-01 00:48:51.141339 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.56s 2026-01-01 00:48:51.141343 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.38s 2026-01-01 00:48:51.141425 | orchestrator | 2026-01-01 00:48:51 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:48:51.141663 | orchestrator | 2026-01-01 00:48:51 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:48:51.144993 | orchestrator | 2026-01-01 00:48:51 | INFO  | Task 
75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:48:51.147589 | orchestrator | 2026-01-01 00:48:51 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:48:51.149877 | orchestrator | 2026-01-01 00:48:51 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:48:51.150011 | orchestrator | 2026-01-01 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:48:54.215477 | orchestrator | 2026-01-01 00:48:54 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:48:54.217627 | orchestrator | 2026-01-01 00:48:54 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:48:54.218376 | orchestrator | 2026-01-01 00:48:54 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:48:54.220360 | orchestrator | 2026-01-01 00:48:54 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:48:54.220970 | orchestrator | 2026-01-01 00:48:54 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:48:54.221365 | orchestrator | 2026-01-01 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:48:57.281139 | orchestrator | 2026-01-01 00:48:57 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:48:57.282972 | orchestrator | 2026-01-01 00:48:57 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:48:57.291995 | orchestrator | 2026-01-01 00:48:57 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:48:57.301212 | orchestrator | 2026-01-01 00:48:57 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:48:57.389231 | orchestrator | 2026-01-01 00:48:57 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:48:57.389348 | orchestrator | 2026-01-01 00:48:57 | INFO  | Wait 1 
second(s) until the next check 2026-01-01 00:49:00.462316 | orchestrator | 2026-01-01 00:49:00 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:00.467514 | orchestrator | 2026-01-01 00:49:00 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:49:00.471713 | orchestrator | 2026-01-01 00:49:00 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:00.474411 | orchestrator | 2026-01-01 00:49:00 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:00.475554 | orchestrator | 2026-01-01 00:49:00 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:00.475650 | orchestrator | 2026-01-01 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:49:03.541350 | orchestrator | 2026-01-01 00:49:03 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:03.542403 | orchestrator | 2026-01-01 00:49:03 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:49:03.543832 | orchestrator | 2026-01-01 00:49:03 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:03.545557 | orchestrator | 2026-01-01 00:49:03 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:03.546092 | orchestrator | 2026-01-01 00:49:03 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:03.546260 | orchestrator | 2026-01-01 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:49:06.571815 | orchestrator | 2026-01-01 00:49:06 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:06.575242 | orchestrator | 2026-01-01 00:49:06 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:49:06.575419 | orchestrator | 2026-01-01 00:49:06 | INFO  | Task 
75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:06.576019 | orchestrator | 2026-01-01 00:49:06 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:06.576646 | orchestrator | 2026-01-01 00:49:06 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:06.576673 | orchestrator | 2026-01-01 00:49:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:49:09.612552 | orchestrator | 2026-01-01 00:49:09 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:09.612662 | orchestrator | 2026-01-01 00:49:09 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:49:09.613038 | orchestrator | 2026-01-01 00:49:09 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:09.613928 | orchestrator | 2026-01-01 00:49:09 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:09.615214 | orchestrator | 2026-01-01 00:49:09 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:09.615336 | orchestrator | 2026-01-01 00:49:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:49:12.647453 | orchestrator | 2026-01-01 00:49:12 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:12.647618 | orchestrator | 2026-01-01 00:49:12 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:49:12.648462 | orchestrator | 2026-01-01 00:49:12 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:12.649241 | orchestrator | 2026-01-01 00:49:12 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:12.649826 | orchestrator | 2026-01-01 00:49:12 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:12.649858 | orchestrator | 2026-01-01 00:49:12 | INFO  | Wait 1 
second(s) until the next check 2026-01-01 00:49:15.683856 | orchestrator | 2026-01-01 00:49:15 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:15.685573 | orchestrator | 2026-01-01 00:49:15 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:49:15.685600 | orchestrator | 2026-01-01 00:49:15 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:15.686387 | orchestrator | 2026-01-01 00:49:15 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:15.688871 | orchestrator | 2026-01-01 00:49:15 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:15.688929 | orchestrator | 2026-01-01 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:49:18.728621 | orchestrator | 2026-01-01 00:49:18 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:18.730398 | orchestrator | 2026-01-01 00:49:18 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:49:18.732488 | orchestrator | 2026-01-01 00:49:18 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:18.734655 | orchestrator | 2026-01-01 00:49:18 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:18.736522 | orchestrator | 2026-01-01 00:49:18 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:18.737165 | orchestrator | 2026-01-01 00:49:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:49:21.823391 | orchestrator | 2026-01-01 00:49:21 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:21.825246 | orchestrator | 2026-01-01 00:49:21 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:49:21.826124 | orchestrator | 2026-01-01 00:49:21 | INFO  | Task 
75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:21.829350 | orchestrator | 2026-01-01 00:49:21 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:21.829588 | orchestrator | 2026-01-01 00:49:21 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:21.829666 | orchestrator | 2026-01-01 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:49:24.886188 | orchestrator | 2026-01-01 00:49:24 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:24.886920 | orchestrator | 2026-01-01 00:49:24 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:49:24.887915 | orchestrator | 2026-01-01 00:49:24 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:24.890148 | orchestrator | 2026-01-01 00:49:24 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:24.890186 | orchestrator | 2026-01-01 00:49:24 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:24.890198 | orchestrator | 2026-01-01 00:49:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:49:27.958930 | orchestrator | 2026-01-01 00:49:27 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:27.961506 | orchestrator | 2026-01-01 00:49:27 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:49:27.961568 | orchestrator | 2026-01-01 00:49:27 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:27.961585 | orchestrator | 2026-01-01 00:49:27 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:27.963276 | orchestrator | 2026-01-01 00:49:27 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:27.963422 | orchestrator | 2026-01-01 00:49:27 | INFO  | Wait 1 
second(s) until the next check 2026-01-01 00:49:30.993948 | orchestrator | 2026-01-01 00:49:30 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:30.994221 | orchestrator | 2026-01-01 00:49:30 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state STARTED 2026-01-01 00:49:30.994776 | orchestrator | 2026-01-01 00:49:30 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:30.995515 | orchestrator | 2026-01-01 00:49:30 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:30.997030 | orchestrator | 2026-01-01 00:49:30 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:30.997086 | orchestrator | 2026-01-01 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:49:34.031516 | orchestrator | 2026-01-01 00:49:34 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:34.035826 | orchestrator | 2026-01-01 00:49:34 | INFO  | Task c754cb7e-7269-4bb3-b4bd-509810780799 is in state SUCCESS 2026-01-01 00:49:34.038594 | orchestrator | 2026-01-01 00:49:34.039552 | orchestrator | 2026-01-01 00:49:34.039605 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:49:34.039621 | orchestrator | 2026-01-01 00:49:34.039632 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 00:49:34.039644 | orchestrator | Thursday 01 January 2026 00:48:18 +0000 (0:00:00.680) 0:00:00.680 ****** 2026-01-01 00:49:34.039655 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:49:34.039668 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:49:34.039678 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:49:34.039691 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:49:34.039702 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:49:34.039714 | orchestrator | ok: [testbed-node-5] 2026-01-01 
00:49:34.039725 | orchestrator | 2026-01-01 00:49:34.039735 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 00:49:34.039766 | orchestrator | Thursday 01 January 2026 00:48:20 +0000 (0:00:01.860) 0:00:02.541 ****** 2026-01-01 00:49:34.039780 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:49:34.039793 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:49:34.039804 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:49:34.039816 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:49:34.039828 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:49:34.039839 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-01 00:49:34.039851 | orchestrator | 2026-01-01 00:49:34.039863 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-01 00:49:34.039872 | orchestrator | 2026-01-01 00:49:34.039893 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-01 00:49:34.039901 | orchestrator | Thursday 01 January 2026 00:48:21 +0000 (0:00:01.684) 0:00:04.225 ****** 2026-01-01 00:49:34.039910 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:49:34.039918 | orchestrator | 2026-01-01 00:49:34.039926 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-01 00:49:34.039933 | orchestrator | Thursday 01 January 2026 00:48:23 +0000 (0:00:01.992) 0:00:06.218 ****** 2026-01-01 00:49:34.039940 | orchestrator | changed: 
[testbed-node-1] => (item=openvswitch) 2026-01-01 00:49:34.039947 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-01 00:49:34.039954 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-01 00:49:34.039986 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-01 00:49:34.039993 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-01 00:49:34.039999 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-01 00:49:34.040006 | orchestrator | 2026-01-01 00:49:34.040013 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-01 00:49:34.040020 | orchestrator | Thursday 01 January 2026 00:48:26 +0000 (0:00:02.350) 0:00:08.568 ****** 2026-01-01 00:49:34.040027 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-01 00:49:34.040034 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-01 00:49:34.040040 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-01 00:49:34.040047 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-01 00:49:34.040053 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-01 00:49:34.040060 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-01 00:49:34.040066 | orchestrator | 2026-01-01 00:49:34.040073 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-01 00:49:34.040080 | orchestrator | Thursday 01 January 2026 00:48:28 +0000 (0:00:02.511) 0:00:11.079 ****** 2026-01-01 00:49:34.040087 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-01 00:49:34.040093 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:49:34.040101 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-01 00:49:34.040107 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-01 
00:49:34.040114 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:49:34.040120 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-01 00:49:34.040127 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:49:34.040146 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-01 00:49:34.040153 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:49:34.040159 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:49:34.040166 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-01 00:49:34.040172 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:49:34.040179 | orchestrator | 2026-01-01 00:49:34.040185 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-01 00:49:34.040192 | orchestrator | Thursday 01 January 2026 00:48:30 +0000 (0:00:01.897) 0:00:12.977 ****** 2026-01-01 00:49:34.040199 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:49:34.040205 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:49:34.040212 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:49:34.040218 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:49:34.040225 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:49:34.040232 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:49:34.040238 | orchestrator | 2026-01-01 00:49:34.040245 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-01 00:49:34.040251 | orchestrator | Thursday 01 January 2026 00:48:31 +0000 (0:00:00.760) 0:00:13.738 ****** 2026-01-01 00:49:34.040278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040320 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040449 | orchestrator | 2026-01-01 00:49:34.040456 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-01 00:49:34.040462 | orchestrator | Thursday 01 January 2026 00:48:33 +0000 (0:00:01.913) 0:00:15.651 ****** 2026-01-01 00:49:34.040473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040521 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040626 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040710 | orchestrator | 2026-01-01 00:49:34.040722 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-01 00:49:34.040733 | orchestrator | Thursday 01 January 2026 00:48:36 +0000 (0:00:03.267) 0:00:18.918 ****** 
2026-01-01 00:49:34.040743 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:49:34.040775 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:49:34.040785 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:49:34.040794 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:49:34.040804 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:49:34.040815 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:49:34.040825 | orchestrator | 2026-01-01 00:49:34.040835 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-01 00:49:34.040845 | orchestrator | Thursday 01 January 2026 00:48:37 +0000 (0:00:01.582) 0:00:20.501 ****** 2026-01-01 00:49:34.040856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040879 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040948 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040968 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.040994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.041048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.041061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-01 00:49:34.041073 | orchestrator | 2026-01-01 00:49:34.041083 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-01 00:49:34.041093 | orchestrator | Thursday 01 January 2026 00:48:40 +0000 (0:00:02.563) 0:00:23.064 ****** 2026-01-01 00:49:34.041103 | orchestrator | 2026-01-01 00:49:34.041112 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-01-01 00:49:34.041118 | orchestrator | Thursday 01 January 2026 00:48:41 +0000 (0:00:00.607) 0:00:23.672 ****** 2026-01-01 00:49:34.041124 | orchestrator | 2026-01-01 00:49:34.041130 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-01 00:49:34.041136 | orchestrator | Thursday 01 January 2026 00:48:41 +0000 (0:00:00.494) 0:00:24.167 ****** 2026-01-01 00:49:34.041142 | orchestrator | 2026-01-01 00:49:34.041149 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-01 00:49:34.041155 | orchestrator | Thursday 01 January 2026 00:48:41 +0000 (0:00:00.271) 0:00:24.438 ****** 2026-01-01 00:49:34.041165 | orchestrator | 2026-01-01 00:49:34.041175 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-01 00:49:34.041186 | orchestrator | Thursday 01 January 2026 00:48:42 +0000 (0:00:00.147) 0:00:24.586 ****** 2026-01-01 00:49:34.041195 | orchestrator | 2026-01-01 00:49:34.041206 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-01 00:49:34.041217 | orchestrator | Thursday 01 January 2026 00:48:42 +0000 (0:00:00.237) 0:00:24.823 ****** 2026-01-01 00:49:34.041227 | orchestrator | 2026-01-01 00:49:34.041237 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-01 00:49:34.041248 | orchestrator | Thursday 01 January 2026 00:48:42 +0000 (0:00:00.439) 0:00:25.263 ****** 2026-01-01 00:49:34.041258 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:49:34.041268 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:49:34.041279 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:49:34.041289 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:49:34.041299 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:49:34.041310 | orchestrator | changed: 
[testbed-node-3] 2026-01-01 00:49:34.041328 | orchestrator | 2026-01-01 00:49:34.041339 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-01 00:49:34.041350 | orchestrator | Thursday 01 January 2026 00:48:55 +0000 (0:00:13.179) 0:00:38.442 ****** 2026-01-01 00:49:34.041361 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:49:34.041372 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:49:34.041382 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:49:34.041392 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:49:34.041402 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:49:34.041413 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:49:34.041423 | orchestrator | 2026-01-01 00:49:34.041433 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-01 00:49:34.041444 | orchestrator | Thursday 01 January 2026 00:48:58 +0000 (0:00:02.095) 0:00:40.538 ****** 2026-01-01 00:49:34.041456 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:49:34.041462 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:49:34.041468 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:49:34.041474 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:49:34.041481 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:49:34.041487 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:49:34.041493 | orchestrator | 2026-01-01 00:49:34.041499 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-01 00:49:34.041505 | orchestrator | Thursday 01 January 2026 00:49:09 +0000 (0:00:11.783) 0:00:52.322 ****** 2026-01-01 00:49:34.041512 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-01 00:49:34.041519 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 
2026-01-01 00:49:34.041525 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-01 00:49:34.041531 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-01 00:49:34.041538 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-01 00:49:34.041562 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-01 00:49:34.041569 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-01 00:49:34.041576 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-01 00:49:34.041582 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-01 00:49:34.041589 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-01 00:49:34.041595 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-01 00:49:34.041601 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-01 00:49:34.041607 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 00:49:34.041613 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 00:49:34.041619 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 
00:49:34.041625 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 00:49:34.041631 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 00:49:34.041638 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-01 00:49:34.041650 | orchestrator | 2026-01-01 00:49:34.041656 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-01 00:49:34.041662 | orchestrator | Thursday 01 January 2026 00:49:17 +0000 (0:00:07.237) 0:00:59.560 ****** 2026-01-01 00:49:34.041669 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-01 00:49:34.041675 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:49:34.041681 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-01 00:49:34.041687 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:49:34.041693 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-01 00:49:34.041700 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:49:34.041706 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-01 00:49:34.041712 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-01 00:49:34.041719 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-01 00:49:34.041725 | orchestrator | 2026-01-01 00:49:34.041731 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-01 00:49:34.041737 | orchestrator | Thursday 01 January 2026 00:49:19 +0000 (0:00:02.833) 0:01:02.393 ****** 2026-01-01 00:49:34.041743 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-01 00:49:34.041767 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-01 00:49:34.041774 | 
orchestrator | skipping: [testbed-node-3] 2026-01-01 00:49:34.041780 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:49:34.041787 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-01 00:49:34.041793 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:49:34.041799 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-01 00:49:34.041805 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-01 00:49:34.041811 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-01 00:49:34.041817 | orchestrator | 2026-01-01 00:49:34.041824 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-01 00:49:34.041832 | orchestrator | Thursday 01 January 2026 00:49:23 +0000 (0:00:03.847) 0:01:06.240 ****** 2026-01-01 00:49:34.041842 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:49:34.041852 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:49:34.041863 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:49:34.041877 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:49:34.041888 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:49:34.041898 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:49:34.041909 | orchestrator | 2026-01-01 00:49:34.041920 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:49:34.041931 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-01 00:49:34.041942 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-01 00:49:34.041953 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-01 00:49:34.041964 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 
2026-01-01 00:49:34.041974 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-01 00:49:34.041990 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-01 00:49:34.042002 | orchestrator | 2026-01-01 00:49:34.042060 | orchestrator | 2026-01-01 00:49:34.042081 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:49:34.042087 | orchestrator | Thursday 01 January 2026 00:49:32 +0000 (0:00:08.674) 0:01:14.914 ****** 2026-01-01 00:49:34.042094 | orchestrator | =============================================================================== 2026-01-01 00:49:34.042100 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.46s 2026-01-01 00:49:34.042106 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 13.18s 2026-01-01 00:49:34.042112 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.24s 2026-01-01 00:49:34.042118 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.85s 2026-01-01 00:49:34.042125 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.27s 2026-01-01 00:49:34.042131 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.83s 2026-01-01 00:49:34.042137 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.56s 2026-01-01 00:49:34.042143 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.51s 2026-01-01 00:49:34.042150 | orchestrator | module-load : Load modules ---------------------------------------------- 2.35s 2026-01-01 00:49:34.042156 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.20s 2026-01-01 00:49:34.042162 | 
orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.10s 2026-01-01 00:49:34.042168 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.99s 2026-01-01 00:49:34.042174 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.91s 2026-01-01 00:49:34.042180 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.90s 2026-01-01 00:49:34.042186 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.86s 2026-01-01 00:49:34.042193 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.68s 2026-01-01 00:49:34.042204 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.58s 2026-01-01 00:49:34.042215 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.76s 2026-01-01 00:49:34.042438 | orchestrator | 2026-01-01 00:49:34 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:34.042511 | orchestrator | 2026-01-01 00:49:34 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:49:34.043999 | orchestrator | 2026-01-01 00:49:34 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:34.046576 | orchestrator | 2026-01-01 00:49:34 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:34.046630 | orchestrator | 2026-01-01 00:49:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:49:37.093860 | orchestrator | 2026-01-01 00:49:37 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:49:37.096893 | orchestrator | 2026-01-01 00:49:37 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state STARTED 2026-01-01 00:49:37.098884 | orchestrator | 2026-01-01 00:49:37 | INFO  | Task 
5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:49:37.101176 | orchestrator | 2026-01-01 00:49:37 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:49:37.102899 | orchestrator | 2026-01-01 00:49:37 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:49:37.103044 | orchestrator | 2026-01-01 00:49:37 | INFO  | Wait 1
second(s) until the next check 2026-01-01 00:50:17.399060 | orchestrator | 2026-01-01 00:50:17 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:17.400557 | orchestrator | 2026-01-01 00:50:17 | INFO  | Task 75fd94d1-cf62-48a3-a795-462fb68a0a43 is in state SUCCESS 2026-01-01 00:50:17.401846 | orchestrator | 2026-01-01 00:50:17.401878 | orchestrator | 2026-01-01 00:50:17.401887 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-01 00:50:17.401896 | orchestrator | 2026-01-01 00:50:17.401905 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-01 00:50:17.401913 | orchestrator | Thursday 01 January 2026 00:45:34 +0000 (0:00:00.240) 0:00:00.240 ****** 2026-01-01 00:50:17.401922 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:50:17.401931 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:50:17.401939 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:50:17.401947 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.401955 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.401962 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.401970 | orchestrator | 2026-01-01 00:50:17.401978 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-01 00:50:17.401986 | orchestrator | Thursday 01 January 2026 00:45:35 +0000 (0:00:00.835) 0:00:01.076 ****** 2026-01-01 00:50:17.401994 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.402003 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.402011 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.402064 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.402072 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.402080 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.402088 | orchestrator | 2026-01-01 00:50:17.402097 | orchestrator | 
TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-01 00:50:17.402106 | orchestrator | Thursday 01 January 2026 00:45:36 +0000 (0:00:00.902) 0:00:01.979 ****** 2026-01-01 00:50:17.402115 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.402149 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.402158 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.402167 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.402175 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.402184 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.402192 | orchestrator | 2026-01-01 00:50:17.402201 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-01 00:50:17.402209 | orchestrator | Thursday 01 January 2026 00:45:37 +0000 (0:00:00.872) 0:00:02.852 ****** 2026-01-01 00:50:17.402261 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:50:17.402271 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:50:17.402280 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:50:17.402289 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.402297 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.402306 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.402315 | orchestrator | 2026-01-01 00:50:17.402323 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-01 00:50:17.402332 | orchestrator | Thursday 01 January 2026 00:45:40 +0000 (0:00:02.587) 0:00:05.439 ****** 2026-01-01 00:50:17.402341 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:50:17.402349 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:50:17.402357 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:50:17.402366 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.402374 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.402382 | 
orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.402420 | orchestrator | 2026-01-01 00:50:17.402429 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-01 00:50:17.402437 | orchestrator | Thursday 01 January 2026 00:45:41 +0000 (0:00:01.195) 0:00:06.635 ****** 2026-01-01 00:50:17.402447 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:50:17.402457 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:50:17.402467 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:50:17.402477 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.402487 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.402519 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.402529 | orchestrator | 2026-01-01 00:50:17.402539 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-01 00:50:17.402549 | orchestrator | Thursday 01 January 2026 00:45:42 +0000 (0:00:00.990) 0:00:07.625 ****** 2026-01-01 00:50:17.402559 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.402570 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.402580 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.402591 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.402601 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.402609 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.402617 | orchestrator | 2026-01-01 00:50:17.402626 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-01 00:50:17.402634 | orchestrator | Thursday 01 January 2026 00:45:42 +0000 (0:00:00.764) 0:00:08.390 ****** 2026-01-01 00:50:17.402643 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.402651 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.402660 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.402668 | orchestrator 
| skipping: [testbed-node-0] 2026-01-01 00:50:17.402677 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.402685 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.402693 | orchestrator | 2026-01-01 00:50:17.402702 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-01 00:50:17.402711 | orchestrator | Thursday 01 January 2026 00:45:43 +0000 (0:00:00.793) 0:00:09.184 ****** 2026-01-01 00:50:17.402719 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-01 00:50:17.402728 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-01 00:50:17.402736 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.402775 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-01 00:50:17.402785 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-01 00:50:17.402794 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.402803 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-01 00:50:17.402812 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-01 00:50:17.402820 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.402829 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-01 00:50:17.402850 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-01 00:50:17.402859 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.402868 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-01 00:50:17.402876 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-01 00:50:17.402885 | orchestrator | skipping: [testbed-node-1] 2026-01-01 
00:50:17.402894 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-01 00:50:17.402902 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-01 00:50:17.402910 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.402983 | orchestrator | 2026-01-01 00:50:17.402992 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-01 00:50:17.403001 | orchestrator | Thursday 01 January 2026 00:45:44 +0000 (0:00:00.758) 0:00:09.942 ****** 2026-01-01 00:50:17.403010 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.403018 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.403027 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.403035 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.403044 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.403052 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.403060 | orchestrator | 2026-01-01 00:50:17.403069 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-01 00:50:17.403079 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:01.189) 0:00:11.131 ****** 2026-01-01 00:50:17.403088 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:50:17.403096 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:50:17.403105 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:50:17.403113 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.403121 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.403130 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.403138 | orchestrator | 2026-01-01 00:50:17.403146 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-01 00:50:17.403155 | orchestrator | Thursday 01 January 2026 00:45:46 +0000 (0:00:01.117) 0:00:12.248 ****** 
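In the k3s_download tasks above, only the binary matching the node architecture is fetched: the x64 download runs on every node, while the arm64 and armhf variants are skipped. A minimal Python sketch of that selection logic, assuming the upstream k3s release artifact names (`k3s`, `k3s-arm64`, `k3s-armhf`) and download URL layout; the real role expresses this as per-task `when:` conditionals on the architecture fact:

```python
# Map architecture facts to k3s release artifact names.
# This mapping is an illustrative assumption, not taken from the role.
K3S_ARTIFACTS = {
    "x86_64": "k3s",          # the "x64" task above
    "aarch64": "k3s-arm64",   # the "arm64" task (skipped here)
    "armv7l": "k3s-armhf",    # the "armhf" task (skipped here)
}

def k3s_download_url(arch: str, version: str) -> str:
    """Return the download URL for the one matching binary;
    tasks for the other architectures would be skipped."""
    try:
        artifact = K3S_ARTIFACTS[arch]
    except KeyError:
        raise ValueError(f"unsupported architecture: {arch}")
    return (f"https://github.com/k3s-io/k3s/releases/download/"
            f"{version}/{artifact}")
```

For a hypothetical release tag, `k3s_download_url("x86_64", "v1.29.4+k3s1")` selects the plain `k3s` artifact, matching the single `changed` download per node seen above.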
2026-01-01 00:50:17.403164 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.403172 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.403180 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.403189 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:50:17.403197 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:50:17.403206 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:50:17.403214 | orchestrator | 2026-01-01 00:50:17.403223 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-01 00:50:17.403232 | orchestrator | Thursday 01 January 2026 00:45:52 +0000 (0:00:05.296) 0:00:17.545 ****** 2026-01-01 00:50:17.403240 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.403249 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.403257 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.403266 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.403274 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.403283 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.403298 | orchestrator | 2026-01-01 00:50:17.403307 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-01 00:50:17.403315 | orchestrator | Thursday 01 January 2026 00:45:53 +0000 (0:00:01.760) 0:00:19.305 ****** 2026-01-01 00:50:17.403324 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.407071 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.407124 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.407135 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.407146 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.407157 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.407168 | orchestrator | 2026-01-01 00:50:17.407180 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 
'main' - Configure the use of a custom container registry] *** 2026-01-01 00:50:17.407193 | orchestrator | Thursday 01 January 2026 00:45:57 +0000 (0:00:03.236) 0:00:22.542 ****** 2026-01-01 00:50:17.407204 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.407215 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.407226 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.407237 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.407247 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.407258 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.407269 | orchestrator | 2026-01-01 00:50:17.407280 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-01 00:50:17.407293 | orchestrator | Thursday 01 January 2026 00:45:58 +0000 (0:00:01.461) 0:00:24.004 ****** 2026-01-01 00:50:17.407311 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-01 00:50:17.407329 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-01 00:50:17.407346 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.407375 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-01 00:50:17.407395 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-01 00:50:17.407414 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.407435 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-01 00:50:17.407452 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-01-01 00:50:17.407470 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.407488 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-01 00:50:17.407506 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-01 00:50:17.407523 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.407542 | orchestrator | skipping: [testbed-node-1] => 
(item=rancher)  2026-01-01 00:50:17.407561 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-01-01 00:50:17.407579 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.407597 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-01 00:50:17.407616 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-01-01 00:50:17.407635 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.407654 | orchestrator | 2026-01-01 00:50:17.407674 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-01-01 00:50:17.407707 | orchestrator | Thursday 01 January 2026 00:46:00 +0000 (0:00:01.562) 0:00:25.566 ****** 2026-01-01 00:50:17.407719 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.407731 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.407741 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.407782 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.407794 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.407805 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.407815 | orchestrator | 2026-01-01 00:50:17.407826 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-01-01 00:50:17.407838 | orchestrator | Thursday 01 January 2026 00:46:01 +0000 (0:00:00.889) 0:00:26.456 ****** 2026-01-01 00:50:17.407849 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.407901 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.407930 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.407941 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.407951 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.407962 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.407972 | orchestrator | 2026-01-01 00:50:17.407983 | orchestrator | PLAY [Deploy k3s 
master nodes] ************************************************* 2026-01-01 00:50:17.407993 | orchestrator | 2026-01-01 00:50:17.408004 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-01-01 00:50:17.408015 | orchestrator | Thursday 01 January 2026 00:46:02 +0000 (0:00:01.632) 0:00:28.089 ****** 2026-01-01 00:50:17.408026 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.408036 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.408047 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.408057 | orchestrator | 2026-01-01 00:50:17.408068 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-01-01 00:50:17.408079 | orchestrator | Thursday 01 January 2026 00:46:05 +0000 (0:00:02.809) 0:00:30.898 ****** 2026-01-01 00:50:17.408089 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.408100 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.408110 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.408121 | orchestrator | 2026-01-01 00:50:17.408139 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-01-01 00:50:17.408150 | orchestrator | Thursday 01 January 2026 00:46:07 +0000 (0:00:01.545) 0:00:32.444 ****** 2026-01-01 00:50:17.408161 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.408171 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.408182 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.408192 | orchestrator | 2026-01-01 00:50:17.408203 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-01-01 00:50:17.408214 | orchestrator | Thursday 01 January 2026 00:46:08 +0000 (0:00:01.314) 0:00:33.758 ****** 2026-01-01 00:50:17.408227 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.408245 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.408270 | orchestrator | ok: [testbed-node-2] 
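The "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" messages interleaved through this log come from the OSISM task watcher polling each queued task until it reaches a terminal state. A minimal sketch of that poll-until-terminal pattern; `fetch_state` is a stand-in callable (an assumption for illustration, not the real OSISM client API):

```python
import time

# States after which a task is no longer polled (assumed set).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0, sleep=time.sleep):
    """Poll every task until all reach a terminal state,
    logging one line per task per check, like the output above."""
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in TERMINAL_STATES:
                still_running.append(task_id)
        if still_running:
            print(f"Wait {interval:g} second(s) until the next check")
            sleep(interval)
        pending = still_running
```

Each check re-reports every still-pending task, which is why the same five UUIDs repeat in the log until one flips to SUCCESS and drops out of the polling set.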
2026-01-01 00:50:17.408292 | orchestrator | 2026-01-01 00:50:17.408311 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-01 00:50:17.408329 | orchestrator | Thursday 01 January 2026 00:46:09 +0000 (0:00:00.993) 0:00:34.751 ****** 2026-01-01 00:50:17.408346 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.408363 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.408381 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.408400 | orchestrator | 2026-01-01 00:50:17.408418 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-01 00:50:17.408436 | orchestrator | Thursday 01 January 2026 00:46:09 +0000 (0:00:00.470) 0:00:35.222 ****** 2026-01-01 00:50:17.408455 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.408473 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.408490 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.408508 | orchestrator | 2026-01-01 00:50:17.408526 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-01 00:50:17.408545 | orchestrator | Thursday 01 January 2026 00:46:10 +0000 (0:00:01.023) 0:00:36.246 ****** 2026-01-01 00:50:17.408564 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.408583 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.408602 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.408622 | orchestrator | 2026-01-01 00:50:17.408641 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-01-01 00:50:17.408659 | orchestrator | Thursday 01 January 2026 00:46:12 +0000 (0:00:02.081) 0:00:38.327 ****** 2026-01-01 00:50:17.408679 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:50:17.408697 | orchestrator | 2026-01-01 00:50:17.408716 | 
orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-01-01 00:50:17.408736 | orchestrator | Thursday 01 January 2026 00:46:13 +0000 (0:00:00.715) 0:00:39.042 ****** 2026-01-01 00:50:17.408854 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.408874 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.408891 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.408942 | orchestrator | 2026-01-01 00:50:17.408961 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-01-01 00:50:17.408979 | orchestrator | Thursday 01 January 2026 00:46:16 +0000 (0:00:02.486) 0:00:41.529 ****** 2026-01-01 00:50:17.408997 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.409015 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.409031 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.409047 | orchestrator | 2026-01-01 00:50:17.409062 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-01-01 00:50:17.409078 | orchestrator | Thursday 01 January 2026 00:46:17 +0000 (0:00:00.953) 0:00:42.482 ****** 2026-01-01 00:50:17.409094 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.409110 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.409126 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.409138 | orchestrator | 2026-01-01 00:50:17.409147 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-01-01 00:50:17.409157 | orchestrator | Thursday 01 January 2026 00:46:18 +0000 (0:00:01.268) 0:00:43.751 ****** 2026-01-01 00:50:17.409166 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.409176 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.409185 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.409194 | orchestrator | 2026-01-01 00:50:17.409204 | orchestrator | TASK 
[k3s_server : Deploy metallb manifest] ************************************ 2026-01-01 00:50:17.409226 | orchestrator | Thursday 01 January 2026 00:46:19 +0000 (0:00:01.479) 0:00:45.232 ****** 2026-01-01 00:50:17.409236 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.409246 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.409255 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.409265 | orchestrator | 2026-01-01 00:50:17.409274 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-01-01 00:50:17.409284 | orchestrator | Thursday 01 January 2026 00:46:20 +0000 (0:00:01.046) 0:00:46.278 ****** 2026-01-01 00:50:17.409293 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.409303 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.409312 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.409321 | orchestrator | 2026-01-01 00:50:17.409336 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-01-01 00:50:17.409353 | orchestrator | Thursday 01 January 2026 00:46:21 +0000 (0:00:00.912) 0:00:47.191 ****** 2026-01-01 00:50:17.409369 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.409384 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.409400 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.409416 | orchestrator | 2026-01-01 00:50:17.409431 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-01-01 00:50:17.409447 | orchestrator | Thursday 01 January 2026 00:46:23 +0000 (0:00:02.146) 0:00:49.338 ****** 2026-01-01 00:50:17.409463 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.409479 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.409494 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.409510 | orchestrator | 2026-01-01 00:50:17.409527 | orchestrator | TASK [k3s_server : Set node 
role label selector based on Kubernetes version] *** 2026-01-01 00:50:17.409542 | orchestrator | Thursday 01 January 2026 00:46:27 +0000 (0:00:03.162) 0:00:52.500 ****** 2026-01-01 00:50:17.409557 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.409573 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.409589 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.409604 | orchestrator | 2026-01-01 00:50:17.409629 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-01-01 00:50:17.409646 | orchestrator | Thursday 01 January 2026 00:46:28 +0000 (0:00:01.154) 0:00:53.655 ****** 2026-01-01 00:50:17.409674 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-01 00:50:17.409692 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-01 00:50:17.409708 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-01 00:50:17.409724 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-01 00:50:17.409740 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-01 00:50:17.409787 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-01 00:50:17.409804 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-01-01 00:50:17.409821 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-01 00:50:17.409838 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-01 00:50:17.409853 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-01 00:50:17.409868 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-01 00:50:17.409882 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-01 00:50:17.409894 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.409908 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.409924 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.409941 | orchestrator | 2026-01-01 00:50:17.409958 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-01-01 00:50:17.409975 | orchestrator | Thursday 01 January 2026 00:47:11 +0000 (0:00:43.546) 0:01:37.202 ****** 2026-01-01 00:50:17.409993 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.410010 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.410062 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.410072 | orchestrator | 2026-01-01 00:50:17.410082 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-01-01 00:50:17.410092 | orchestrator | Thursday 01 January 2026 00:47:12 +0000 (0:00:00.339) 0:01:37.542 ****** 2026-01-01 00:50:17.410101 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.410111 | orchestrator | changed: 
[testbed-node-2] 2026-01-01 00:50:17.410121 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.410130 | orchestrator | 2026-01-01 00:50:17.410139 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-01-01 00:50:17.410149 | orchestrator | Thursday 01 January 2026 00:47:13 +0000 (0:00:01.064) 0:01:38.606 ****** 2026-01-01 00:50:17.410159 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.410168 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.410178 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.410193 | orchestrator | 2026-01-01 00:50:17.410226 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-01-01 00:50:17.410249 | orchestrator | Thursday 01 January 2026 00:47:14 +0000 (0:00:01.719) 0:01:40.326 ****** 2026-01-01 00:50:17.410264 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.410281 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.410297 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.410313 | orchestrator | 2026-01-01 00:50:17.410329 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-01-01 00:50:17.410359 | orchestrator | Thursday 01 January 2026 00:47:41 +0000 (0:00:26.545) 0:02:06.871 ****** 2026-01-01 00:50:17.410375 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.410392 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.410408 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.410423 | orchestrator | 2026-01-01 00:50:17.410439 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-01-01 00:50:17.410456 | orchestrator | Thursday 01 January 2026 00:47:42 +0000 (0:00:00.685) 0:02:07.557 ****** 2026-01-01 00:50:17.410473 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.410488 | orchestrator | ok: [testbed-node-1] 2026-01-01 
00:50:17.410504 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.410520 | orchestrator | 2026-01-01 00:50:17.410537 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-01-01 00:50:17.410553 | orchestrator | Thursday 01 January 2026 00:47:42 +0000 (0:00:00.692) 0:02:08.249 ****** 2026-01-01 00:50:17.410570 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.410586 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.410601 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.410617 | orchestrator | 2026-01-01 00:50:17.410632 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-01-01 00:50:17.410647 | orchestrator | Thursday 01 January 2026 00:47:43 +0000 (0:00:00.652) 0:02:08.902 ****** 2026-01-01 00:50:17.410663 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.410679 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.410696 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.410712 | orchestrator | 2026-01-01 00:50:17.410736 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-01-01 00:50:17.410782 | orchestrator | Thursday 01 January 2026 00:47:44 +0000 (0:00:00.976) 0:02:09.879 ****** 2026-01-01 00:50:17.410799 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.410815 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.410831 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.410849 | orchestrator | 2026-01-01 00:50:17.410865 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-01-01 00:50:17.410881 | orchestrator | Thursday 01 January 2026 00:47:44 +0000 (0:00:00.318) 0:02:10.197 ****** 2026-01-01 00:50:17.410896 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.410913 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.410929 | orchestrator | changed: 
[testbed-node-2] 2026-01-01 00:50:17.410945 | orchestrator | 2026-01-01 00:50:17.410961 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-01-01 00:50:17.410977 | orchestrator | Thursday 01 January 2026 00:47:45 +0000 (0:00:00.660) 0:02:10.857 ****** 2026-01-01 00:50:17.410993 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.411009 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.411026 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.411042 | orchestrator | 2026-01-01 00:50:17.411058 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-01-01 00:50:17.411073 | orchestrator | Thursday 01 January 2026 00:47:46 +0000 (0:00:00.667) 0:02:11.524 ****** 2026-01-01 00:50:17.411089 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.411104 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.411119 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.411134 | orchestrator | 2026-01-01 00:50:17.411150 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-01-01 00:50:17.411167 | orchestrator | Thursday 01 January 2026 00:47:47 +0000 (0:00:01.392) 0:02:12.916 ****** 2026-01-01 00:50:17.411183 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:50:17.411199 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:50:17.411215 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:50:17.411231 | orchestrator | 2026-01-01 00:50:17.411247 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-01-01 00:50:17.411264 | orchestrator | Thursday 01 January 2026 00:47:48 +0000 (0:00:00.827) 0:02:13.744 ****** 2026-01-01 00:50:17.411292 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.411308 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.411324 | orchestrator | skipping: 
[testbed-node-2] 2026-01-01 00:50:17.411340 | orchestrator | 2026-01-01 00:50:17.411357 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-01-01 00:50:17.411373 | orchestrator | Thursday 01 January 2026 00:47:48 +0000 (0:00:00.354) 0:02:14.098 ****** 2026-01-01 00:50:17.411389 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.411405 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.411421 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.411438 | orchestrator | 2026-01-01 00:50:17.411454 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-01-01 00:50:17.411470 | orchestrator | Thursday 01 January 2026 00:47:48 +0000 (0:00:00.278) 0:02:14.377 ****** 2026-01-01 00:50:17.411486 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.411502 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.411518 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.411533 | orchestrator | 2026-01-01 00:50:17.411548 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-01 00:50:17.411565 | orchestrator | Thursday 01 January 2026 00:47:49 +0000 (0:00:00.986) 0:02:15.364 ****** 2026-01-01 00:50:17.411581 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.411597 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.411613 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.411629 | orchestrator | 2026-01-01 00:50:17.411645 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-01 00:50:17.411662 | orchestrator | Thursday 01 January 2026 00:47:50 +0000 (0:00:00.704) 0:02:16.068 ****** 2026-01-01 00:50:17.411678 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-01 00:50:17.411706 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-01 00:50:17.411723 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-01 00:50:17.411741 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-01 00:50:17.411825 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-01 00:50:17.411842 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-01 00:50:17.411858 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-01 00:50:17.411875 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-01 00:50:17.411892 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-01 00:50:17.411908 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-01 00:50:17.411924 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-01 00:50:17.411934 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-01 00:50:17.411943 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-01 00:50:17.411953 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-01 00:50:17.411969 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-01 00:50:17.411979 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-01 00:50:17.411989 | orchestrator | changed: 
[testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-01 00:50:17.412007 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-01 00:50:17.412016 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-01 00:50:17.412026 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-01 00:50:17.412035 | orchestrator | 2026-01-01 00:50:17.412044 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-01-01 00:50:17.412054 | orchestrator | 2026-01-01 00:50:17.412064 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-01-01 00:50:17.412073 | orchestrator | Thursday 01 January 2026 00:47:53 +0000 (0:00:03.238) 0:02:19.307 ****** 2026-01-01 00:50:17.412082 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:50:17.412091 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:50:17.412101 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:50:17.412110 | orchestrator | 2026-01-01 00:50:17.412119 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-01-01 00:50:17.412129 | orchestrator | Thursday 01 January 2026 00:47:54 +0000 (0:00:00.538) 0:02:19.845 ****** 2026-01-01 00:50:17.412138 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:50:17.412148 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:50:17.412157 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:50:17.412231 | orchestrator | 2026-01-01 00:50:17.412275 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-01-01 00:50:17.412284 | orchestrator | Thursday 01 January 2026 00:47:55 +0000 (0:00:00.703) 0:02:20.549 ****** 2026-01-01 00:50:17.412291 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:50:17.412299 | 
orchestrator | ok: [testbed-node-4] 2026-01-01 00:50:17.412307 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:50:17.412315 | orchestrator | 2026-01-01 00:50:17.412322 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-01-01 00:50:17.412330 | orchestrator | Thursday 01 January 2026 00:47:55 +0000 (0:00:00.342) 0:02:20.892 ****** 2026-01-01 00:50:17.412338 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:50:17.412347 | orchestrator | 2026-01-01 00:50:17.412354 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-01-01 00:50:17.412362 | orchestrator | Thursday 01 January 2026 00:47:56 +0000 (0:00:00.706) 0:02:21.598 ****** 2026-01-01 00:50:17.412370 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.412378 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.412386 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.412394 | orchestrator | 2026-01-01 00:50:17.412401 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-01-01 00:50:17.412409 | orchestrator | Thursday 01 January 2026 00:47:56 +0000 (0:00:00.407) 0:02:22.005 ****** 2026-01-01 00:50:17.412417 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.412425 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.412432 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.412440 | orchestrator | 2026-01-01 00:50:17.412448 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-01-01 00:50:17.412456 | orchestrator | Thursday 01 January 2026 00:47:56 +0000 (0:00:00.359) 0:02:22.365 ****** 2026-01-01 00:50:17.412463 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.412471 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.412479 | 
orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.412487 | orchestrator | 2026-01-01 00:50:17.412494 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-01-01 00:50:17.412502 | orchestrator | Thursday 01 January 2026 00:47:57 +0000 (0:00:00.442) 0:02:22.808 ****** 2026-01-01 00:50:17.412510 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:50:17.412518 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:50:17.412525 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:50:17.412533 | orchestrator | 2026-01-01 00:50:17.412549 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-01-01 00:50:17.412564 | orchestrator | Thursday 01 January 2026 00:47:58 +0000 (0:00:00.907) 0:02:23.716 ****** 2026-01-01 00:50:17.412571 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:50:17.412579 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:50:17.412587 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:50:17.412595 | orchestrator | 2026-01-01 00:50:17.412603 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-01-01 00:50:17.412610 | orchestrator | Thursday 01 January 2026 00:47:59 +0000 (0:00:01.093) 0:02:24.809 ****** 2026-01-01 00:50:17.412618 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:50:17.412626 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:50:17.412634 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:50:17.412641 | orchestrator | 2026-01-01 00:50:17.412649 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-01-01 00:50:17.412657 | orchestrator | Thursday 01 January 2026 00:48:00 +0000 (0:00:01.385) 0:02:26.195 ****** 2026-01-01 00:50:17.412665 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:50:17.412672 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:50:17.412680 | orchestrator | 
changed: [testbed-node-5] 2026-01-01 00:50:17.412688 | orchestrator | 2026-01-01 00:50:17.412696 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-01 00:50:17.412703 | orchestrator | 2026-01-01 00:50:17.412711 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-01 00:50:17.412719 | orchestrator | Thursday 01 January 2026 00:48:11 +0000 (0:00:10.542) 0:02:36.737 ****** 2026-01-01 00:50:17.412727 | orchestrator | ok: [testbed-manager] 2026-01-01 00:50:17.412734 | orchestrator | 2026-01-01 00:50:17.412742 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-01 00:50:17.412781 | orchestrator | Thursday 01 January 2026 00:48:12 +0000 (0:00:00.898) 0:02:37.636 ****** 2026-01-01 00:50:17.412795 | orchestrator | changed: [testbed-manager] 2026-01-01 00:50:17.412807 | orchestrator | 2026-01-01 00:50:17.412826 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-01 00:50:17.412834 | orchestrator | Thursday 01 January 2026 00:48:12 +0000 (0:00:00.531) 0:02:38.167 ****** 2026-01-01 00:50:17.412842 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-01 00:50:17.412850 | orchestrator | 2026-01-01 00:50:17.412858 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-01 00:50:17.412865 | orchestrator | Thursday 01 January 2026 00:48:13 +0000 (0:00:00.557) 0:02:38.725 ****** 2026-01-01 00:50:17.412873 | orchestrator | changed: [testbed-manager] 2026-01-01 00:50:17.412881 | orchestrator | 2026-01-01 00:50:17.412888 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-01 00:50:17.412896 | orchestrator | Thursday 01 January 2026 00:48:14 +0000 (0:00:01.005) 0:02:39.730 ****** 2026-01-01 00:50:17.412904 | orchestrator | changed: 
[testbed-manager] 2026-01-01 00:50:17.412912 | orchestrator | 2026-01-01 00:50:17.412919 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-01 00:50:17.412927 | orchestrator | Thursday 01 January 2026 00:48:14 +0000 (0:00:00.604) 0:02:40.335 ****** 2026-01-01 00:50:17.412935 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-01 00:50:17.412942 | orchestrator | 2026-01-01 00:50:17.412950 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-01 00:50:17.412958 | orchestrator | Thursday 01 January 2026 00:48:16 +0000 (0:00:01.740) 0:02:42.075 ****** 2026-01-01 00:50:17.412966 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-01 00:50:17.412974 | orchestrator | 2026-01-01 00:50:17.412981 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-01 00:50:17.412989 | orchestrator | Thursday 01 January 2026 00:48:17 +0000 (0:00:00.985) 0:02:43.061 ****** 2026-01-01 00:50:17.412997 | orchestrator | changed: [testbed-manager] 2026-01-01 00:50:17.413005 | orchestrator | 2026-01-01 00:50:17.413012 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-01-01 00:50:17.413026 | orchestrator | Thursday 01 January 2026 00:48:18 +0000 (0:00:00.718) 0:02:43.780 ****** 2026-01-01 00:50:17.413034 | orchestrator | changed: [testbed-manager] 2026-01-01 00:50:17.413041 | orchestrator | 2026-01-01 00:50:17.413049 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-01-01 00:50:17.413057 | orchestrator | 2026-01-01 00:50:17.413065 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-01-01 00:50:17.413072 | orchestrator | Thursday 01 January 2026 00:48:18 +0000 (0:00:00.478) 0:02:44.259 ****** 2026-01-01 00:50:17.413080 | orchestrator | ok: [testbed-manager] 
2026-01-01 00:50:17.413088 | orchestrator | 2026-01-01 00:50:17.413096 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-01-01 00:50:17.413103 | orchestrator | Thursday 01 January 2026 00:48:18 +0000 (0:00:00.148) 0:02:44.407 ****** 2026-01-01 00:50:17.413111 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-01-01 00:50:17.413119 | orchestrator | 2026-01-01 00:50:17.413126 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-01-01 00:50:17.413134 | orchestrator | Thursday 01 January 2026 00:48:19 +0000 (0:00:00.243) 0:02:44.650 ****** 2026-01-01 00:50:17.413142 | orchestrator | ok: [testbed-manager] 2026-01-01 00:50:17.413149 | orchestrator | 2026-01-01 00:50:17.413157 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-01-01 00:50:17.413165 | orchestrator | Thursday 01 January 2026 00:48:20 +0000 (0:00:00.989) 0:02:45.639 ****** 2026-01-01 00:50:17.413172 | orchestrator | ok: [testbed-manager] 2026-01-01 00:50:17.413180 | orchestrator | 2026-01-01 00:50:17.413188 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-01-01 00:50:17.413196 | orchestrator | Thursday 01 January 2026 00:48:22 +0000 (0:00:02.133) 0:02:47.773 ****** 2026-01-01 00:50:17.413203 | orchestrator | changed: [testbed-manager] 2026-01-01 00:50:17.413211 | orchestrator | 2026-01-01 00:50:17.413219 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-01-01 00:50:17.413227 | orchestrator | Thursday 01 January 2026 00:48:23 +0000 (0:00:00.781) 0:02:48.554 ****** 2026-01-01 00:50:17.413234 | orchestrator | ok: [testbed-manager] 2026-01-01 00:50:17.413242 | orchestrator | 2026-01-01 00:50:17.413255 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 
2026-01-01 00:50:17.413263 | orchestrator | Thursday 01 January 2026 00:48:23 +0000 (0:00:00.446) 0:02:49.001 ****** 2026-01-01 00:50:17.413271 | orchestrator | changed: [testbed-manager] 2026-01-01 00:50:17.413279 | orchestrator | 2026-01-01 00:50:17.413287 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-01-01 00:50:17.413294 | orchestrator | Thursday 01 January 2026 00:48:31 +0000 (0:00:07.938) 0:02:56.940 ****** 2026-01-01 00:50:17.413302 | orchestrator | changed: [testbed-manager] 2026-01-01 00:50:17.413310 | orchestrator | 2026-01-01 00:50:17.413317 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-01-01 00:50:17.413325 | orchestrator | Thursday 01 January 2026 00:48:46 +0000 (0:00:15.355) 0:03:12.295 ****** 2026-01-01 00:50:17.413333 | orchestrator | ok: [testbed-manager] 2026-01-01 00:50:17.413341 | orchestrator | 2026-01-01 00:50:17.413348 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-01-01 00:50:17.413356 | orchestrator | 2026-01-01 00:50:17.413364 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-01-01 00:50:17.413371 | orchestrator | Thursday 01 January 2026 00:48:47 +0000 (0:00:00.727) 0:03:13.022 ****** 2026-01-01 00:50:17.413379 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.413387 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.413395 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.413402 | orchestrator | 2026-01-01 00:50:17.413410 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-01-01 00:50:17.413418 | orchestrator | Thursday 01 January 2026 00:48:47 +0000 (0:00:00.348) 0:03:13.371 ****** 2026-01-01 00:50:17.413426 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.413443 | orchestrator | skipping: [testbed-node-1] 
2026-01-01 00:50:17.413451 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.413458 | orchestrator | 2026-01-01 00:50:17.413466 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-01-01 00:50:17.413478 | orchestrator | Thursday 01 January 2026 00:48:48 +0000 (0:00:00.403) 0:03:13.774 ****** 2026-01-01 00:50:17.413486 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:50:17.413494 | orchestrator | 2026-01-01 00:50:17.413502 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-01-01 00:50:17.413510 | orchestrator | Thursday 01 January 2026 00:48:49 +0000 (0:00:00.849) 0:03:14.623 ****** 2026-01-01 00:50:17.413517 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-01 00:50:17.413525 | orchestrator | 2026-01-01 00:50:17.413533 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-01-01 00:50:17.413541 | orchestrator | Thursday 01 January 2026 00:48:49 +0000 (0:00:00.793) 0:03:15.417 ****** 2026-01-01 00:50:17.413548 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 00:50:17.413556 | orchestrator | 2026-01-01 00:50:17.413564 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-01-01 00:50:17.413572 | orchestrator | Thursday 01 January 2026 00:48:50 +0000 (0:00:00.919) 0:03:16.337 ****** 2026-01-01 00:50:17.413579 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.413587 | orchestrator | 2026-01-01 00:50:17.413595 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-01-01 00:50:17.413602 | orchestrator | Thursday 01 January 2026 00:48:51 +0000 (0:00:00.154) 0:03:16.492 ****** 2026-01-01 00:50:17.413610 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 00:50:17.413618 | 
orchestrator | 2026-01-01 00:50:17.413626 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-01-01 00:50:17.413633 | orchestrator | Thursday 01 January 2026 00:48:51 +0000 (0:00:00.931) 0:03:17.423 ****** 2026-01-01 00:50:17.413641 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.413649 | orchestrator | 2026-01-01 00:50:17.413657 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-01-01 00:50:17.413664 | orchestrator | Thursday 01 January 2026 00:48:52 +0000 (0:00:00.139) 0:03:17.563 ****** 2026-01-01 00:50:17.413672 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.413680 | orchestrator | 2026-01-01 00:50:17.413687 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-01-01 00:50:17.413695 | orchestrator | Thursday 01 January 2026 00:48:52 +0000 (0:00:00.167) 0:03:17.730 ****** 2026-01-01 00:50:17.413703 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.413711 | orchestrator | 2026-01-01 00:50:17.413719 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-01-01 00:50:17.413726 | orchestrator | Thursday 01 January 2026 00:48:52 +0000 (0:00:00.151) 0:03:17.881 ****** 2026-01-01 00:50:17.413734 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.413742 | orchestrator | 2026-01-01 00:50:17.413768 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-01-01 00:50:17.413776 | orchestrator | Thursday 01 January 2026 00:48:52 +0000 (0:00:00.140) 0:03:18.021 ****** 2026-01-01 00:50:17.413784 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-01 00:50:17.413791 | orchestrator | 2026-01-01 00:50:17.413799 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-01-01 00:50:17.413807 | orchestrator | Thursday 01 
January 2026 00:48:58 +0000 (0:00:06.340) 0:03:24.362 ****** 2026-01-01 00:50:17.413815 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-01 00:50:17.413822 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-01-01 00:50:17.413830 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-01-01 00:50:17.413838 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-01-01 00:50:17.413851 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-01-01 00:50:17.413859 | orchestrator | 2026-01-01 00:50:17.413866 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-01-01 00:50:17.413874 | orchestrator | Thursday 01 January 2026 00:49:41 +0000 (0:00:42.641) 0:04:07.004 ****** 2026-01-01 00:50:17.413887 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 00:50:17.413895 | orchestrator | 2026-01-01 00:50:17.413903 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-01-01 00:50:17.413910 | orchestrator | Thursday 01 January 2026 00:49:42 +0000 (0:00:01.308) 0:04:08.313 ****** 2026-01-01 00:50:17.413918 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-01 00:50:17.413926 | orchestrator | 2026-01-01 00:50:17.413938 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-01-01 00:50:17.413955 | orchestrator | Thursday 01 January 2026 00:49:44 +0000 (0:00:01.609) 0:04:09.922 ****** 2026-01-01 00:50:17.413973 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-01 00:50:17.413985 | orchestrator | 2026-01-01 00:50:17.413996 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-01-01 00:50:17.414008 | orchestrator | Thursday 01 January 2026 00:49:45 +0000 
(0:00:01.075) 0:04:10.998 ****** 2026-01-01 00:50:17.414187 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.414198 | orchestrator | 2026-01-01 00:50:17.414206 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-01 00:50:17.414214 | orchestrator | Thursday 01 January 2026 00:49:45 +0000 (0:00:00.109) 0:04:11.108 ****** 2026-01-01 00:50:17.414222 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-01 00:50:17.414230 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-01 00:50:17.414237 | orchestrator | 2026-01-01 00:50:17.414245 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-01 00:50:17.414253 | orchestrator | Thursday 01 January 2026 00:49:47 +0000 (0:00:01.741) 0:04:12.850 ****** 2026-01-01 00:50:17.414261 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.414268 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.414276 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.414284 | orchestrator | 2026-01-01 00:50:17.414297 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-01 00:50:17.414306 | orchestrator | Thursday 01 January 2026 00:49:47 +0000 (0:00:00.304) 0:04:13.154 ****** 2026-01-01 00:50:17.414313 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.414321 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.414329 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.414336 | orchestrator | 2026-01-01 00:50:17.414344 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-01 00:50:17.414352 | orchestrator | 2026-01-01 00:50:17.414360 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-01 
00:50:17.414367 | orchestrator | Thursday 01 January 2026 00:49:48 +0000 (0:00:01.012) 0:04:14.166 ****** 2026-01-01 00:50:17.414375 | orchestrator | ok: [testbed-manager] 2026-01-01 00:50:17.414383 | orchestrator | 2026-01-01 00:50:17.414390 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-01-01 00:50:17.414398 | orchestrator | Thursday 01 January 2026 00:49:48 +0000 (0:00:00.208) 0:04:14.375 ****** 2026-01-01 00:50:17.414406 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-01 00:50:17.414414 | orchestrator | 2026-01-01 00:50:17.414421 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-01 00:50:17.414429 | orchestrator | Thursday 01 January 2026 00:49:49 +0000 (0:00:00.269) 0:04:14.644 ****** 2026-01-01 00:50:17.414437 | orchestrator | changed: [testbed-manager] 2026-01-01 00:50:17.414444 | orchestrator | 2026-01-01 00:50:17.414452 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-01-01 00:50:17.414467 | orchestrator | 2026-01-01 00:50:17.414475 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-01-01 00:50:17.414483 | orchestrator | Thursday 01 January 2026 00:49:55 +0000 (0:00:06.004) 0:04:20.649 ****** 2026-01-01 00:50:17.414491 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:50:17.414498 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:50:17.414506 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:50:17.414514 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:50:17.414521 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:50:17.414529 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:50:17.414537 | orchestrator | 2026-01-01 00:50:17.414544 | orchestrator | TASK [Manage labels] *********************************************************** 2026-01-01 00:50:17.414552 | orchestrator | 
Thursday 01 January 2026 00:49:56 +0000 (0:00:01.170) 0:04:21.820 ****** 2026-01-01 00:50:17.414560 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-01 00:50:17.414567 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-01 00:50:17.414575 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-01 00:50:17.414583 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-01 00:50:17.414591 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-01 00:50:17.414598 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-01 00:50:17.414606 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-01 00:50:17.414614 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-01 00:50:17.414621 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-01 00:50:17.414629 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-01 00:50:17.414637 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-01 00:50:17.414645 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-01 00:50:17.414660 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-01 00:50:17.414668 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-01 00:50:17.414676 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-01 00:50:17.414683 | orchestrator | 
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-01 00:50:17.414691 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-01 00:50:17.414699 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-01 00:50:17.414706 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-01 00:50:17.414714 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-01 00:50:17.414722 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-01 00:50:17.414730 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-01 00:50:17.414737 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-01 00:50:17.414798 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-01 00:50:17.414809 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-01 00:50:17.414819 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-01 00:50:17.414828 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-01 00:50:17.414847 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-01 00:50:17.414856 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-01 00:50:17.414865 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-01 00:50:17.414874 | orchestrator | 2026-01-01 00:50:17.414883 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-01 
00:50:17.414892 | orchestrator | Thursday 01 January 2026 00:50:12 +0000 (0:00:15.752) 0:04:37.572 ****** 2026-01-01 00:50:17.414902 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.414911 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.414920 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.414928 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.414936 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.414943 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.414951 | orchestrator | 2026-01-01 00:50:17.414959 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-01 00:50:17.414965 | orchestrator | Thursday 01 January 2026 00:50:13 +0000 (0:00:01.214) 0:04:38.786 ****** 2026-01-01 00:50:17.414972 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:50:17.414978 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:50:17.414985 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:50:17.414992 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:50:17.414998 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:50:17.415005 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:50:17.415011 | orchestrator | 2026-01-01 00:50:17.415018 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:50:17.415024 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:50:17.415033 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-01 00:50:17.415040 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-01 00:50:17.415047 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-01 00:50:17.415053 | orchestrator | 
testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-01 00:50:17.415060 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-01 00:50:17.415066 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-01 00:50:17.415073 | orchestrator | 2026-01-01 00:50:17.415080 | orchestrator | 2026-01-01 00:50:17.415086 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:50:17.415093 | orchestrator | Thursday 01 January 2026 00:50:13 +0000 (0:00:00.526) 0:04:39.313 ****** 2026-01-01 00:50:17.415100 | orchestrator | =============================================================================== 2026-01-01 00:50:17.415106 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.55s 2026-01-01 00:50:17.415113 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.64s 2026-01-01 00:50:17.415120 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.55s 2026-01-01 00:50:17.415131 | orchestrator | Manage labels ---------------------------------------------------------- 15.75s 2026-01-01 00:50:17.415138 | orchestrator | kubectl : Install required packages ------------------------------------ 15.36s 2026-01-01 00:50:17.415149 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.54s 2026-01-01 00:50:17.415156 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.94s 2026-01-01 00:50:17.415162 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.34s 2026-01-01 00:50:17.415169 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.01s 2026-01-01 00:50:17.415175 | orchestrator | 
k3s_download : Download k3s binary x64 ---------------------------------- 5.30s 2026-01-01 00:50:17.415182 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.24s 2026-01-01 00:50:17.415188 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.24s 2026-01-01 00:50:17.415195 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.16s 2026-01-01 00:50:17.415201 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.81s 2026-01-01 00:50:17.415208 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.59s 2026-01-01 00:50:17.415214 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.49s 2026-01-01 00:50:17.415221 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.15s 2026-01-01 00:50:17.415228 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.13s 2026-01-01 00:50:17.415234 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.08s 2026-01-01 00:50:17.415244 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.76s 2026-01-01 00:50:17.415251 | orchestrator | 2026-01-01 00:50:17 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:17.415257 | orchestrator | 2026-01-01 00:50:17 | INFO  | Task 2e9fbba1-6878-4b35-8be4-d91498933823 is in state STARTED 2026-01-01 00:50:17.415264 | orchestrator | 2026-01-01 00:50:17 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:17.415271 | orchestrator | 2026-01-01 00:50:17 | INFO  | Task 0eb6b141-9625-4a0b-aa4c-979ca9985df5 is in state STARTED 2026-01-01 00:50:17.415277 | orchestrator | 2026-01-01 00:50:17 | INFO  | 
Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:17.415284 | orchestrator | 2026-01-01 00:50:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:20.545139 | orchestrator | 2026-01-01 00:50:20 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:20.545247 | orchestrator | 2026-01-01 00:50:20 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:20.545263 | orchestrator | 2026-01-01 00:50:20 | INFO  | Task 2e9fbba1-6878-4b35-8be4-d91498933823 is in state STARTED 2026-01-01 00:50:20.545275 | orchestrator | 2026-01-01 00:50:20 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:20.545287 | orchestrator | 2026-01-01 00:50:20 | INFO  | Task 0eb6b141-9625-4a0b-aa4c-979ca9985df5 is in state STARTED 2026-01-01 00:50:20.545299 | orchestrator | 2026-01-01 00:50:20 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:20.545311 | orchestrator | 2026-01-01 00:50:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:23.568880 | orchestrator | 2026-01-01 00:50:23 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:23.573452 | orchestrator | 2026-01-01 00:50:23 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:23.576161 | orchestrator | 2026-01-01 00:50:23 | INFO  | Task 2e9fbba1-6878-4b35-8be4-d91498933823 is in state STARTED 2026-01-01 00:50:23.577400 | orchestrator | 2026-01-01 00:50:23 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:23.578679 | orchestrator | 2026-01-01 00:50:23 | INFO  | Task 0eb6b141-9625-4a0b-aa4c-979ca9985df5 is in state STARTED 2026-01-01 00:50:23.580448 | orchestrator | 2026-01-01 00:50:23 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:23.580681 | orchestrator | 2026-01-01 00:50:23 | INFO  | Wait 1 
second(s) until the next check 2026-01-01 00:50:26.630878 | orchestrator | 2026-01-01 00:50:26 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:26.635719 | orchestrator | 2026-01-01 00:50:26 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:26.639915 | orchestrator | 2026-01-01 00:50:26 | INFO  | Task 2e9fbba1-6878-4b35-8be4-d91498933823 is in state SUCCESS 2026-01-01 00:50:26.642827 | orchestrator | 2026-01-01 00:50:26 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:26.645613 | orchestrator | 2026-01-01 00:50:26 | INFO  | Task 0eb6b141-9625-4a0b-aa4c-979ca9985df5 is in state STARTED 2026-01-01 00:50:26.648791 | orchestrator | 2026-01-01 00:50:26 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:26.650058 | orchestrator | 2026-01-01 00:50:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:29.715555 | orchestrator | 2026-01-01 00:50:29 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:29.718534 | orchestrator | 2026-01-01 00:50:29 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:29.723535 | orchestrator | 2026-01-01 00:50:29 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:29.725250 | orchestrator | 2026-01-01 00:50:29 | INFO  | Task 0eb6b141-9625-4a0b-aa4c-979ca9985df5 is in state STARTED 2026-01-01 00:50:29.728394 | orchestrator | 2026-01-01 00:50:29 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:29.730223 | orchestrator | 2026-01-01 00:50:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:32.769991 | orchestrator | 2026-01-01 00:50:32 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:32.770730 | orchestrator | 2026-01-01 00:50:32 | INFO  | Task 
5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:32.771741 | orchestrator | 2026-01-01 00:50:32 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:32.772312 | orchestrator | 2026-01-01 00:50:32 | INFO  | Task 0eb6b141-9625-4a0b-aa4c-979ca9985df5 is in state SUCCESS 2026-01-01 00:50:32.773953 | orchestrator | 2026-01-01 00:50:32 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:32.773979 | orchestrator | 2026-01-01 00:50:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:35.828030 | orchestrator | 2026-01-01 00:50:35 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:35.829142 | orchestrator | 2026-01-01 00:50:35 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:35.830729 | orchestrator | 2026-01-01 00:50:35 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:35.833857 | orchestrator | 2026-01-01 00:50:35 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:35.833914 | orchestrator | 2026-01-01 00:50:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:38.881396 | orchestrator | 2026-01-01 00:50:38 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:38.883183 | orchestrator | 2026-01-01 00:50:38 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:38.885246 | orchestrator | 2026-01-01 00:50:38 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:38.887327 | orchestrator | 2026-01-01 00:50:38 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:38.887370 | orchestrator | 2026-01-01 00:50:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:41.931194 | orchestrator | 2026-01-01 00:50:41 | INFO  | Task 
fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:41.932986 | orchestrator | 2026-01-01 00:50:41 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:41.935900 | orchestrator | 2026-01-01 00:50:41 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:41.937917 | orchestrator | 2026-01-01 00:50:41 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:41.937967 | orchestrator | 2026-01-01 00:50:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:44.988884 | orchestrator | 2026-01-01 00:50:44 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:44.990225 | orchestrator | 2026-01-01 00:50:44 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:44.995433 | orchestrator | 2026-01-01 00:50:44 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:44.997805 | orchestrator | 2026-01-01 00:50:44 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:44.997930 | orchestrator | 2026-01-01 00:50:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:48.068948 | orchestrator | 2026-01-01 00:50:48 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:48.069496 | orchestrator | 2026-01-01 00:50:48 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:48.071042 | orchestrator | 2026-01-01 00:50:48 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:48.072625 | orchestrator | 2026-01-01 00:50:48 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:48.072651 | orchestrator | 2026-01-01 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:51.128409 | orchestrator | 2026-01-01 00:50:51 | INFO  | Task 
fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:51.128715 | orchestrator | 2026-01-01 00:50:51 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:51.129305 | orchestrator | 2026-01-01 00:50:51 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:51.130098 | orchestrator | 2026-01-01 00:50:51 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:51.130125 | orchestrator | 2026-01-01 00:50:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:54.166847 | orchestrator | 2026-01-01 00:50:54 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:54.167122 | orchestrator | 2026-01-01 00:50:54 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:54.169456 | orchestrator | 2026-01-01 00:50:54 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:54.170212 | orchestrator | 2026-01-01 00:50:54 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:54.170284 | orchestrator | 2026-01-01 00:50:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:50:57.227702 | orchestrator | 2026-01-01 00:50:57 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:50:57.229841 | orchestrator | 2026-01-01 00:50:57 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:50:57.231419 | orchestrator | 2026-01-01 00:50:57 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:50:57.233162 | orchestrator | 2026-01-01 00:50:57 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED 2026-01-01 00:50:57.233195 | orchestrator | 2026-01-01 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:51:00.277102 | orchestrator | 2026-01-01 00:51:00 | INFO  | Task 
fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:51:00.277885 | orchestrator | 2026-01-01 00:51:00 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED
2026-01-01 00:51:00.279071 | orchestrator | 2026-01-01 00:51:00 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED
2026-01-01 00:51:00.280599 | orchestrator | 2026-01-01 00:51:00 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED
2026-01-01 00:51:00.280698 | orchestrator | 2026-01-01 00:51:00 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:03.319715 | orchestrator | 2026-01-01 00:51:03 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:51:03.320529 | orchestrator | 2026-01-01 00:51:03 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED
2026-01-01 00:51:03.321878 | orchestrator | 2026-01-01 00:51:03 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED
2026-01-01 00:51:03.322648 | orchestrator | 2026-01-01 00:51:03 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED
2026-01-01 00:51:03.322944 | orchestrator | 2026-01-01 00:51:03 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:06.369738 | orchestrator | 2026-01-01 00:51:06 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:51:06.371391 | orchestrator | 2026-01-01 00:51:06 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED
2026-01-01 00:51:06.373068 | orchestrator | 2026-01-01 00:51:06 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED
2026-01-01 00:51:06.374885 | orchestrator | 2026-01-01 00:51:06 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED
2026-01-01 00:51:06.374934 | orchestrator | 2026-01-01 00:51:06 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:09.420107 | orchestrator | 2026-01-01 00:51:09 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:51:09.420945 | orchestrator | 2026-01-01 00:51:09 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED
2026-01-01 00:51:09.422592 | orchestrator | 2026-01-01 00:51:09 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED
2026-01-01 00:51:09.424622 | orchestrator | 2026-01-01 00:51:09 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state STARTED
2026-01-01 00:51:09.424719 | orchestrator | 2026-01-01 00:51:09 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:51:12.475433 | orchestrator | 2026-01-01 00:51:12 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:51:12.476778 | orchestrator | 2026-01-01 00:51:12 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED
2026-01-01 00:51:12.479827 | orchestrator | 2026-01-01 00:51:12 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED
2026-01-01 00:51:12.481168 | orchestrator | 2026-01-01 00:51:12 | INFO  | Task 06cc00b5-fa2c-4854-bf41-c0ef9a3932db is in state SUCCESS
2026-01-01 00:51:12.483233 | orchestrator |
2026-01-01 00:51:12.483269 | orchestrator |
2026-01-01 00:51:12.483281 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-01-01 00:51:12.483294 | orchestrator |
2026-01-01 00:51:12.483305 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-01 00:51:12.483319 | orchestrator | Thursday 01 January 2026 00:50:20 +0000 (0:00:00.226) 0:00:00.226 ******
2026-01-01 00:51:12.483331 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-01 00:51:12.483343 | orchestrator |
2026-01-01 00:51:12.483354 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-01 00:51:12.483366 | orchestrator | Thursday 01 January 2026 00:50:20 +0000 (0:00:00.960) 0:00:01.186 ******
2026-01-01 00:51:12.483377 | orchestrator | changed: [testbed-manager]
2026-01-01 00:51:12.483388 | orchestrator |
2026-01-01 00:51:12.483399 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-01-01 00:51:12.483411 | orchestrator | Thursday 01 January 2026 00:50:22 +0000 (0:00:01.830) 0:00:03.017 ******
2026-01-01 00:51:12.483421 | orchestrator | changed: [testbed-manager]
2026-01-01 00:51:12.483433 | orchestrator |
2026-01-01 00:51:12.483444 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:51:12.483456 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:51:12.483469 | orchestrator |
2026-01-01 00:51:12.483480 | orchestrator |
2026-01-01 00:51:12.483491 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:51:12.483502 | orchestrator | Thursday 01 January 2026 00:50:23 +0000 (0:00:00.631) 0:00:03.648 ******
2026-01-01 00:51:12.483512 | orchestrator | ===============================================================================
2026-01-01 00:51:12.483523 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.83s
2026-01-01 00:51:12.483534 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.96s
2026-01-01 00:51:12.483545 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.63s
2026-01-01 00:51:12.483556 | orchestrator |
2026-01-01 00:51:12.483567 | orchestrator |
2026-01-01 00:51:12.483578 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-01 00:51:12.483589 | orchestrator |
2026-01-01 00:51:12.483600 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-01 00:51:12.483611 | orchestrator | Thursday 01 January 2026 00:50:19 +0000 (0:00:00.212) 0:00:00.212 ******
2026-01-01 00:51:12.483622 | orchestrator | ok: [testbed-manager]
2026-01-01 00:51:12.483634 | orchestrator |
2026-01-01 00:51:12.483645 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-01 00:51:12.483657 | orchestrator | Thursday 01 January 2026 00:50:20 +0000 (0:00:00.899) 0:00:01.111 ******
2026-01-01 00:51:12.483668 | orchestrator | ok: [testbed-manager]
2026-01-01 00:51:12.483679 | orchestrator |
2026-01-01 00:51:12.483690 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-01 00:51:12.483701 | orchestrator | Thursday 01 January 2026 00:50:21 +0000 (0:00:00.999) 0:00:02.110 ******
2026-01-01 00:51:12.483712 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-01 00:51:12.483723 | orchestrator |
2026-01-01 00:51:12.483734 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-01 00:51:12.483769 | orchestrator | Thursday 01 January 2026 00:50:22 +0000 (0:00:00.839) 0:00:02.950 ******
2026-01-01 00:51:12.483780 | orchestrator | changed: [testbed-manager]
2026-01-01 00:51:12.483787 | orchestrator |
2026-01-01 00:51:12.483794 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-01 00:51:12.483814 | orchestrator | Thursday 01 January 2026 00:50:24 +0000 (0:00:01.989) 0:00:04.939 ******
2026-01-01 00:51:12.483821 | orchestrator | changed: [testbed-manager]
2026-01-01 00:51:12.483827 | orchestrator |
2026-01-01 00:51:12.483836 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-01 00:51:12.483843 | orchestrator | Thursday 01 January 2026 00:50:25 +0000 (0:00:00.894) 0:00:05.834 ******
2026-01-01 00:51:12.483851 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-01 00:51:12.483859 | orchestrator |
2026-01-01 00:51:12.483867 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-01 00:51:12.483874 | orchestrator | Thursday 01 January 2026 00:50:28 +0000 (0:00:02.919) 0:00:08.754 ******
2026-01-01 00:51:12.483882 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-01 00:51:12.483889 | orchestrator |
2026-01-01 00:51:12.483897 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-01 00:51:12.483905 | orchestrator | Thursday 01 January 2026 00:50:29 +0000 (0:00:01.207) 0:00:09.962 ******
2026-01-01 00:51:12.483913 | orchestrator | ok: [testbed-manager]
2026-01-01 00:51:12.483920 | orchestrator |
2026-01-01 00:51:12.483928 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-01 00:51:12.483935 | orchestrator | Thursday 01 January 2026 00:50:29 +0000 (0:00:00.422) 0:00:10.385 ******
2026-01-01 00:51:12.483944 | orchestrator | ok: [testbed-manager]
2026-01-01 00:51:12.483951 | orchestrator |
2026-01-01 00:51:12.483959 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:51:12.483967 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 00:51:12.483975 | orchestrator |
2026-01-01 00:51:12.483982 | orchestrator |
2026-01-01 00:51:12.483990 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:51:12.483999 | orchestrator | Thursday 01 January 2026 00:50:30 +0000 (0:00:00.344) 0:00:10.730 ******
2026-01-01 00:51:12.484006 | orchestrator | ===============================================================================
2026-01-01 00:51:12.484014 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.92s
2026-01-01 00:51:12.484088 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.99s
2026-01-01 00:51:12.484104 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.21s
2026-01-01 00:51:12.484124 | orchestrator | Create .kube directory -------------------------------------------------- 1.00s
2026-01-01 00:51:12.484132 | orchestrator | Get home directory of operator user ------------------------------------- 0.90s
2026-01-01 00:51:12.484139 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.90s
2026-01-01 00:51:12.484148 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.84s
2026-01-01 00:51:12.484159 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.42s
2026-01-01 00:51:12.484167 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s
2026-01-01 00:51:12.484175 | orchestrator |
2026-01-01 00:51:12.484183 | orchestrator |
2026-01-01 00:51:12.484191 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-01-01 00:51:12.484197 | orchestrator |
2026-01-01 00:51:12.484204 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-01-01 00:51:12.484211 | orchestrator | Thursday 01 January 2026 00:48:47 +0000 (0:00:00.097) 0:00:00.097 ******
2026-01-01 00:51:12.484217 | orchestrator | ok: [localhost] => {
2026-01-01 00:51:12.484225 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-01-01 00:51:12.484232 | orchestrator | }
2026-01-01 00:51:12.484239 | orchestrator |
2026-01-01 00:51:12.484245 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-01-01 00:51:12.484252 | orchestrator | Thursday 01 January 2026 00:48:47 +0000 (0:00:00.047) 0:00:00.144 ******
2026-01-01 00:51:12.484260 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-01-01 00:51:12.485521 | orchestrator | ...ignoring
2026-01-01 00:51:12.485536 | orchestrator |
2026-01-01 00:51:12.485543 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-01-01 00:51:12.485551 | orchestrator | Thursday 01 January 2026 00:48:51 +0000 (0:00:03.628) 0:00:03.773 ******
2026-01-01 00:51:12.485558 | orchestrator | skipping: [localhost]
2026-01-01 00:51:12.485564 | orchestrator |
2026-01-01 00:51:12.485571 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-01-01 00:51:12.485578 | orchestrator | Thursday 01 January 2026 00:48:51 +0000 (0:00:00.044) 0:00:03.817 ******
2026-01-01 00:51:12.485585 | orchestrator | ok: [localhost]
2026-01-01 00:51:12.485591 | orchestrator |
2026-01-01 00:51:12.485598 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 00:51:12.485605 | orchestrator |
2026-01-01 00:51:12.485612 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 00:51:12.485619 | orchestrator | Thursday 01 January 2026 00:48:51 +0000 (0:00:00.178) 0:00:03.996 ******
2026-01-01 00:51:12.485625 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:51:12.485632 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:51:12.485639 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:51:12.485645 | orchestrator |
2026-01-01 00:51:12.485652 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 00:51:12.485658 | orchestrator | Thursday 01 January 2026 00:48:51 +0000 (0:00:00.286) 0:00:04.283 ******
2026-01-01 00:51:12.485665 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-01-01 00:51:12.485673 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-01-01 00:51:12.485679 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-01-01 00:51:12.485686 | orchestrator |
2026-01-01 00:51:12.485692 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-01-01 00:51:12.485699 | orchestrator |
2026-01-01 00:51:12.485706 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-01 00:51:12.485712 | orchestrator | Thursday 01 January 2026 00:48:53 +0000 (0:00:01.181) 0:00:05.464 ******
2026-01-01 00:51:12.485720 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:51:12.485727 | orchestrator |
2026-01-01 00:51:12.485733 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-01-01 00:51:12.485763 | orchestrator | Thursday 01 January 2026 00:48:54 +0000 (0:00:01.045) 0:00:06.510 ******
2026-01-01 00:51:12.485770 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:51:12.485778 | orchestrator |
2026-01-01 00:51:12.485784 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-01-01 00:51:12.485791 | orchestrator | Thursday 01 January 2026 00:48:55 +0000 (0:00:01.204) 0:00:07.715 ******
2026-01-01 00:51:12.485797 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:51:12.485805 | orchestrator |
2026-01-01 00:51:12.485812 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-01-01 00:51:12.485818 | orchestrator | Thursday 01 January 2026 00:48:55 +0000 (0:00:00.511) 0:00:08.227 ******
2026-01-01 00:51:12.485825 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:51:12.485831 | orchestrator |
2026-01-01 00:51:12.485838 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-01-01 00:51:12.485844 | orchestrator | Thursday 01 January 2026 00:48:56 +0000 (0:00:01.074) 0:00:09.301 ******
2026-01-01 00:51:12.485851 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:51:12.485858 | orchestrator |
2026-01-01 00:51:12.485864 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-01-01 00:51:12.485871 | orchestrator | Thursday 01 January 2026 00:48:57 +0000 (0:00:01.071) 0:00:10.372 ******
2026-01-01 00:51:12.485878 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:51:12.485884 | orchestrator |
2026-01-01 00:51:12.485905 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-01 00:51:12.485912 | orchestrator | Thursday 01 January 2026 00:48:59 +0000 (0:00:01.572) 0:00:11.945 ******
2026-01-01 00:51:12.485919 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:51:12.485926 | orchestrator |
2026-01-01 00:51:12.485933 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-01-01 00:51:12.485949 | orchestrator | Thursday 01 January 2026 00:49:02 +0000 (0:00:03.431) 0:00:15.377 ******
2026-01-01 00:51:12.485959 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:51:12.485970 | orchestrator |
2026-01-01 00:51:12.485980 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-01-01 00:51:12.485991 | orchestrator | Thursday 01 January 2026 00:49:03 +0000 (0:00:00.935) 0:00:16.312 ******
2026-01-01 00:51:12.486002 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:51:12.486012 | orchestrator |
2026-01-01 00:51:12.486095 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-01-01 00:51:12.486106 | orchestrator | Thursday 01 January 2026 00:49:04 +0000 (0:00:00.306) 0:00:16.620 ******
2026-01-01 00:51:12.486117 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:51:12.486127 | orchestrator |
2026-01-01 00:51:12.486137 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-01-01 00:51:12.486148 | orchestrator | Thursday 01 January 2026 00:49:04 +0000 (0:00:00.337) 0:00:16.957 ******
2026-01-01 00:51:12.486163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:51:12.486178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:51:12.486188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:51:12.486208 | orchestrator |
2026-01-01 00:51:12.486219 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-01-01 00:51:12.486230 | orchestrator | Thursday 01 January 2026 00:49:05 +0000 (0:00:01.011) 0:00:17.968 ******
2026-01-01 00:51:12.486272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:51:12.486287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:51:12.486299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:51:12.486318 | orchestrator |
2026-01-01 00:51:12.486325 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-01-01 00:51:12.486332 | orchestrator | Thursday 01 January 2026 00:49:07 +0000 (0:00:02.163) 0:00:20.132 ******
2026-01-01 00:51:12.486338 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-01 00:51:12.486345 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-01 00:51:12.486352 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-01 00:51:12.486359 | orchestrator |
2026-01-01 00:51:12.486365 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-01-01 00:51:12.486372 | orchestrator | Thursday 01 January 2026 00:49:09 +0000 (0:00:01.600) 0:00:21.733 ******
2026-01-01 00:51:12.486379 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-01 00:51:12.486385 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-01 00:51:12.486392 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-01-01 00:51:12.486399 | orchestrator |
2026-01-01 00:51:12.486405 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-01-01 00:51:12.486416 | orchestrator | Thursday 01 January 2026 00:49:11 +0000 (0:00:02.599) 0:00:24.332 ******
2026-01-01 00:51:12.486424 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-01 00:51:12.486430 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-01 00:51:12.486437 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-01-01 00:51:12.486444 | orchestrator |
2026-01-01 00:51:12.486454 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-01-01 00:51:12.486461 | orchestrator | Thursday 01 January 2026 00:49:13 +0000 (0:00:01.504) 0:00:25.837 ******
2026-01-01 00:51:12.486467 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-01 00:51:12.486474 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-01 00:51:12.486481 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-01-01 00:51:12.486487 | orchestrator |
2026-01-01 00:51:12.486494 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-01-01 00:51:12.486501 | orchestrator | Thursday 01 January 2026 00:49:15 +0000 (0:00:01.848) 0:00:27.686 ******
2026-01-01 00:51:12.486507 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-01 00:51:12.486514 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-01 00:51:12.486521 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-01 00:51:12.486527 | orchestrator |
2026-01-01 00:51:12.486534 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-01-01 00:51:12.486540 | orchestrator | Thursday 01 January 2026 00:49:16 +0000 (0:00:01.613) 0:00:29.302 ******
2026-01-01 00:51:12.486547 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-01 00:51:12.486553 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-01 00:51:12.486560 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-01 00:51:12.486567 | orchestrator |
2026-01-01 00:51:12.486573 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-01 00:51:12.486580 | orchestrator | Thursday 01 January 2026 00:49:18 +0000 (0:00:01.639) 0:00:30.942 ******
2026-01-01 00:51:12.486591 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:51:12.486597 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:51:12.486604 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:51:12.486611 | orchestrator |
2026-01-01 00:51:12.486617 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-01-01 00:51:12.486624 | orchestrator | Thursday 01 January 2026 00:49:18 +0000 (0:00:00.509) 0:00:31.451 ******
2026-01-01 00:51:12.486631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:51:12.486643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:51:12.486655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:51:12.486662 | orchestrator |
2026-01-01 00:51:12.486669 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-01-01 00:51:12.486676 | orchestrator | Thursday 01 January 2026 00:49:21 +0000 (0:00:02.020) 0:00:33.472 ******
2026-01-01 00:51:12.486682 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:51:12.486694 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:51:12.486700 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:51:12.486707 | orchestrator |
2026-01-01 00:51:12.486714 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-01-01 00:51:12.486720 | orchestrator | Thursday 01 January 2026 00:49:21 +0000 (0:00:00.968) 0:00:34.440 ******
2026-01-01 00:51:12.486727 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:51:12.486734 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:51:12.486758 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:51:12.486765 | orchestrator |
2026-01-01 00:51:12.486771 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-01-01 00:51:12.486778 | orchestrator | Thursday 01 January 2026 00:49:28 +0000 (0:00:06.762) 0:00:41.203 ******
2026-01-01 00:51:12.486785 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:51:12.486791 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:51:12.486798 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:51:12.486804 | orchestrator |
2026-01-01 00:51:12.486811 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-01 00:51:12.486818 | orchestrator |
2026-01-01 00:51:12.486824 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-01 00:51:12.486831 | orchestrator | Thursday 01 January 2026 00:49:29 +0000 (0:00:00.353) 0:00:41.556 ******
2026-01-01 00:51:12.486838 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:51:12.486844 | orchestrator |
2026-01-01 00:51:12.486851 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-01 00:51:12.486858 | orchestrator | Thursday 01 January 2026 00:49:29 +0000 (0:00:00.613) 0:00:42.169 ******
2026-01-01 00:51:12.486865 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:51:12.486871 | orchestrator |
2026-01-01 00:51:12.486878 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-01 00:51:12.486885 | orchestrator | Thursday 01 January 2026 00:49:30 +0000 (0:00:00.299) 0:00:42.468 ******
2026-01-01 00:51:12.486891 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:51:12.486898 | orchestrator |
2026-01-01 00:51:12.486904 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-01 00:51:12.486911 | orchestrator | Thursday 01 January 2026 00:49:31 +0000 (0:00:01.678) 0:00:44.147 ******
2026-01-01 00:51:12.486918 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:51:12.486924 | orchestrator |
2026-01-01 00:51:12.486931 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-01 00:51:12.486938 | orchestrator |
2026-01-01 00:51:12.486944 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-01 00:51:12.486951 | orchestrator | Thursday 01 January 2026 00:50:29 +0000 (0:00:57.479) 0:01:41.626 ******
2026-01-01 00:51:12.486958 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:51:12.486964 | orchestrator |
2026-01-01 00:51:12.486971 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-01 00:51:12.486978 | orchestrator | Thursday 01 January 2026 00:50:29 +0000 (0:00:00.707) 0:01:42.334 ******
2026-01-01 00:51:12.486985 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:51:12.486991 | orchestrator |
2026-01-01 00:51:12.486998 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-01 00:51:12.487005 | orchestrator | Thursday 01 January 2026 00:50:30 +0000 (0:00:00.322) 0:01:42.657 ******
2026-01-01 00:51:12.487011 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:51:12.487018 | orchestrator |
2026-01-01 00:51:12.487025 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-01 00:51:12.487031 | orchestrator | Thursday 01 January 2026 00:50:32 +0000 (0:00:02.296) 0:01:44.953 ******
2026-01-01 00:51:12.487038 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:51:12.487045 | orchestrator |
2026-01-01 00:51:12.487051 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-01 00:51:12.487058 | orchestrator |
2026-01-01 00:51:12.487065 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-01 00:51:12.487076 | orchestrator | Thursday 01 January 2026 00:50:47 +0000 (0:00:15.237) 0:02:00.190 ******
2026-01-01 00:51:12.487082 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:51:12.487089 | orchestrator |
2026-01-01 00:51:12.487100 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-01 00:51:12.487107 | orchestrator | Thursday 01 January 2026 00:50:48 +0000 (0:00:00.599) 0:02:00.789 ******
2026-01-01 00:51:12.487114 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:51:12.487120 | orchestrator |
2026-01-01 00:51:12.487127 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-01 00:51:12.487134 | orchestrator | Thursday 01 January 2026 00:50:48 +0000 (0:00:00.261) 0:02:01.051 ******
2026-01-01 00:51:12.487144 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:51:12.487151 | orchestrator |
2026-01-01 00:51:12.487157 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-01 00:51:12.487164 | orchestrator | Thursday 01 January 2026 00:50:55 +0000 (0:00:06.848) 0:02:07.900 ******
2026-01-01 00:51:12.487171 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:51:12.487177 | orchestrator |
2026-01-01 00:51:12.487184 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-01-01 00:51:12.487191 | orchestrator |
2026-01-01 00:51:12.487197 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-01-01 00:51:12.487204 | orchestrator | Thursday 01 January 2026 00:51:05 +0000 (0:00:10.191) 0:02:18.092 ******
2026-01-01 00:51:12.487211 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:51:12.487217 | orchestrator |
2026-01-01 00:51:12.487224 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-01-01 00:51:12.487231 | orchestrator | Thursday 01 January 2026 00:51:06 +0000 (0:00:00.666) 0:02:18.758 ******
2026-01-01 00:51:12.487238 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-01-01 00:51:12.487244 | orchestrator | enable_outward_rabbitmq_True
2026-01-01 00:51:12.487251 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-01-01 00:51:12.487257 | orchestrator | outward_rabbitmq_restart
2026-01-01 00:51:12.487264 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:51:12.487271 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:51:12.487277 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:51:12.487284 | orchestrator |
2026-01-01 00:51:12.487291 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-01-01 00:51:12.487297 | orchestrator | skipping: no hosts matched
2026-01-01 00:51:12.487304 | orchestrator |
2026-01-01 00:51:12.487311 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-01-01 00:51:12.487317 | orchestrator | skipping: no hosts matched
2026-01-01 00:51:12.487324 | orchestrator |
2026-01-01 00:51:12.487330 | orchestrator | PLAY
[Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-01 00:51:12.487337 | orchestrator | skipping: no hosts matched 2026-01-01 00:51:12.487344 | orchestrator | 2026-01-01 00:51:12.487350 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:51:12.487358 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-01 00:51:12.487365 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-01 00:51:12.487372 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:51:12.487379 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-01 00:51:12.487385 | orchestrator | 2026-01-01 00:51:12.487392 | orchestrator | 2026-01-01 00:51:12.487399 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:51:12.487414 | orchestrator | Thursday 01 January 2026 00:51:08 +0000 (0:00:02.614) 0:02:21.373 ****** 2026-01-01 00:51:12.487421 | orchestrator | =============================================================================== 2026-01-01 00:51:12.487427 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 82.91s 2026-01-01 00:51:12.487434 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.82s 2026-01-01 00:51:12.487441 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.76s 2026-01-01 00:51:12.487447 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.63s 2026-01-01 00:51:12.487454 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 3.43s 2026-01-01 00:51:12.487461 | orchestrator | rabbitmq : Enable all stable feature 
flags ------------------------------ 2.61s 2026-01-01 00:51:12.487467 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.60s 2026-01-01 00:51:12.487474 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.16s 2026-01-01 00:51:12.487480 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.02s 2026-01-01 00:51:12.487487 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.92s 2026-01-01 00:51:12.487494 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.85s 2026-01-01 00:51:12.487500 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.64s 2026-01-01 00:51:12.487507 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.61s 2026-01-01 00:51:12.487513 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.60s 2026-01-01 00:51:12.487520 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.57s 2026-01-01 00:51:12.487527 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.50s 2026-01-01 00:51:12.487534 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.20s 2026-01-01 00:51:12.487544 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.18s 2026-01-01 00:51:12.487551 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.07s 2026-01-01 00:51:12.487557 | orchestrator | rabbitmq : Check if running RabbitMQ is at most one version behind ------ 1.07s 2026-01-01 00:51:12.487564 | orchestrator | 2026-01-01 00:51:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:51:15.522890 | orchestrator | 2026-01-01 00:51:15 | INFO  | Task 
fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:51:15.525461 | orchestrator | 2026-01-01 00:51:15 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:51:15.526778 | orchestrator | 2026-01-01 00:51:15 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:51:15.526876 | orchestrator | 2026-01-01 00:51:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:52:01.362551 | orchestrator | 2026-01-01 00:52:01 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:52:01.363847 | orchestrator | 2026-01-01 00:52:01 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in
state STARTED 2026-01-01 00:52:01.367597 | orchestrator | 2026-01-01 00:52:01 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:52:01.367664 | orchestrator | 2026-01-01 00:52:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:52:04.423664 | orchestrator | 2026-01-01 00:52:04 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:52:04.425646 | orchestrator | 2026-01-01 00:52:04 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state STARTED 2026-01-01 00:52:04.429445 | orchestrator | 2026-01-01 00:52:04 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:52:04.429492 | orchestrator | 2026-01-01 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:52:07.485517 | orchestrator | 2026-01-01 00:52:07 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:52:07.488218 | orchestrator | 2026-01-01 00:52:07 | INFO  | Task 5d877855-77e9-4ba5-a2da-ddce78f523cb is in state SUCCESS 2026-01-01 00:52:07.498325 | orchestrator | 2026-01-01 00:52:07.498395 | orchestrator | 2026-01-01 00:52:07.498407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:52:07.498418 | orchestrator | 2026-01-01 00:52:07.498428 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 00:52:07.498448 | orchestrator | Thursday 01 January 2026 00:49:38 +0000 (0:00:00.609) 0:00:00.609 ****** 2026-01-01 00:52:07.498459 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:52:07.498470 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:52:07.498479 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:52:07.498489 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.498498 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.498508 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.498517 | orchestrator | 2026-01-01 
00:52:07.498527 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 00:52:07.498537 | orchestrator | Thursday 01 January 2026 00:49:40 +0000 (0:00:01.101) 0:00:01.710 ****** 2026-01-01 00:52:07.498547 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-01 00:52:07.498557 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-01 00:52:07.498566 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-01 00:52:07.498576 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-01 00:52:07.498585 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-01 00:52:07.498595 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-01 00:52:07.498604 | orchestrator | 2026-01-01 00:52:07.498614 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-01 00:52:07.498623 | orchestrator | 2026-01-01 00:52:07.498633 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-01 00:52:07.498643 | orchestrator | Thursday 01 January 2026 00:49:41 +0000 (0:00:01.190) 0:00:02.900 ****** 2026-01-01 00:52:07.500065 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:52:07.500105 | orchestrator | 2026-01-01 00:52:07.500114 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-01 00:52:07.500123 | orchestrator | Thursday 01 January 2026 00:49:42 +0000 (0:00:01.173) 0:00:04.074 ****** 2026-01-01 00:52:07.500135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.500147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.500156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.500164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.500191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.500200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.500208 | orchestrator | 2026-01-01 00:52:07.501506 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-01 00:52:07.501530 | orchestrator | Thursday 01 January 2026 00:49:44 +0000 (0:00:02.028) 0:00:06.103 ****** 2026-01-01 00:52:07.501548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501558 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501608 | orchestrator | 2026-01-01 00:52:07.501617 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-01 00:52:07.501625 | orchestrator | Thursday 01 January 2026 00:49:46 +0000 (0:00:02.232) 0:00:08.336 ****** 2026-01-01 00:52:07.501633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501641 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501696 | orchestrator | 2026-01-01 00:52:07.501704 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-01 00:52:07.501713 | orchestrator | Thursday 01 January 2026 00:49:48 +0000 (0:00:01.534) 0:00:09.870 ****** 2026-01-01 00:52:07.501721 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501779 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501829 | orchestrator | 2026-01-01 00:52:07.501842 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-01 00:52:07.501851 | orchestrator | Thursday 01 January 2026 00:49:50 +0000 (0:00:01.855) 0:00:11.726 ****** 
2026-01-01 00:52:07.501863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501872 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.501917 | orchestrator | 2026-01-01 00:52:07.501925 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-01 00:52:07.501934 | orchestrator | Thursday 01 January 2026 00:49:51 +0000 (0:00:01.449) 0:00:13.175 ****** 2026-01-01 00:52:07.501942 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:52:07.501950 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:52:07.501958 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:52:07.501966 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:52:07.501974 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:52:07.501982 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:52:07.501989 | orchestrator | 2026-01-01 00:52:07.501997 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-01 00:52:07.502005 | orchestrator | Thursday 01 January 2026 00:49:54 +0000 (0:00:03.395) 0:00:16.571 ****** 2026-01-01 00:52:07.502044 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-01 00:52:07.502056 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': 
'192.168.16.10'}) 2026-01-01 00:52:07.502064 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-01 00:52:07.502071 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-01 00:52:07.502079 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-01 00:52:07.502087 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-01 00:52:07.502095 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-01 00:52:07.502103 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-01 00:52:07.502117 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-01 00:52:07.502125 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-01 00:52:07.502133 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-01 00:52:07.502144 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-01 00:52:07.502153 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-01 00:52:07.502163 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-01 00:52:07.502171 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-01 00:52:07.502179 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-01 00:52:07.502187 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-01 00:52:07.502201 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-01 00:52:07.502209 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-01 00:52:07.502218 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-01 00:52:07.502226 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-01 00:52:07.502234 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-01 00:52:07.502242 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-01 00:52:07.502250 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-01 00:52:07.502257 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-01 00:52:07.502265 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-01 00:52:07.502273 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-01 00:52:07.502281 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-01 00:52:07.502289 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-01 00:52:07.502297 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-01 00:52:07.502305 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-01 00:52:07.502313 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-01 00:52:07.502321 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-01 00:52:07.502329 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-01 00:52:07.502337 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-01 00:52:07.502343 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-01 00:52:07.502350 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-01 00:52:07.502357 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-01 00:52:07.502364 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-01 00:52:07.502370 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-01 00:52:07.502377 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-01 00:52:07.502385 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-01 00:52:07.502391 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-01 00:52:07.502398 | orchestrator | ok: 
[testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-01 00:52:07.502409 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-01 00:52:07.502416 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-01 00:52:07.502434 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-01 00:52:07.502440 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-01 00:52:07.502447 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-01 00:52:07.502454 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-01 00:52:07.502461 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-01 00:52:07.502467 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-01 00:52:07.502474 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-01 00:52:07.502481 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-01 00:52:07.502487 | orchestrator | 2026-01-01 00:52:07.502494 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-01 00:52:07.502501 | orchestrator | Thursday 01 
January 2026 00:50:17 +0000 (0:00:22.768) 0:00:39.340 ****** 2026-01-01 00:52:07.502507 | orchestrator | 2026-01-01 00:52:07.502514 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-01 00:52:07.502520 | orchestrator | Thursday 01 January 2026 00:50:17 +0000 (0:00:00.153) 0:00:39.493 ****** 2026-01-01 00:52:07.502527 | orchestrator | 2026-01-01 00:52:07.502534 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-01 00:52:07.502540 | orchestrator | Thursday 01 January 2026 00:50:18 +0000 (0:00:00.221) 0:00:39.715 ****** 2026-01-01 00:52:07.502547 | orchestrator | 2026-01-01 00:52:07.502554 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-01 00:52:07.502560 | orchestrator | Thursday 01 January 2026 00:50:18 +0000 (0:00:00.170) 0:00:39.885 ****** 2026-01-01 00:52:07.502567 | orchestrator | 2026-01-01 00:52:07.502573 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-01 00:52:07.502580 | orchestrator | Thursday 01 January 2026 00:50:18 +0000 (0:00:00.079) 0:00:39.965 ****** 2026-01-01 00:52:07.502586 | orchestrator | 2026-01-01 00:52:07.502593 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-01 00:52:07.502600 | orchestrator | Thursday 01 January 2026 00:50:18 +0000 (0:00:00.079) 0:00:40.044 ****** 2026-01-01 00:52:07.502606 | orchestrator | 2026-01-01 00:52:07.502613 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-01 00:52:07.502620 | orchestrator | Thursday 01 January 2026 00:50:18 +0000 (0:00:00.073) 0:00:40.117 ****** 2026-01-01 00:52:07.502626 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:52:07.502633 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:52:07.502640 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:52:07.502647 
| orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.502653 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.502660 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.502666 | orchestrator | 2026-01-01 00:52:07.502673 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-01 00:52:07.502680 | orchestrator | Thursday 01 January 2026 00:50:21 +0000 (0:00:03.143) 0:00:43.260 ****** 2026-01-01 00:52:07.502687 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:52:07.502693 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:52:07.502700 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:52:07.502706 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:52:07.502713 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:52:07.502720 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:52:07.502743 | orchestrator | 2026-01-01 00:52:07.502750 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-01 00:52:07.502756 | orchestrator | 2026-01-01 00:52:07.502763 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-01 00:52:07.502770 | orchestrator | Thursday 01 January 2026 00:50:51 +0000 (0:00:29.810) 0:01:13.071 ****** 2026-01-01 00:52:07.502776 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:52:07.502783 | orchestrator | 2026-01-01 00:52:07.502790 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-01 00:52:07.502797 | orchestrator | Thursday 01 January 2026 00:50:52 +0000 (0:00:00.851) 0:01:13.923 ****** 2026-01-01 00:52:07.502803 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:52:07.502810 | orchestrator | 2026-01-01 00:52:07.502817 | orchestrator | 
TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-01 00:52:07.502823 | orchestrator | Thursday 01 January 2026 00:50:52 +0000 (0:00:00.652) 0:01:14.575 ****** 2026-01-01 00:52:07.502830 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.502837 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.502844 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.502850 | orchestrator | 2026-01-01 00:52:07.502857 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-01 00:52:07.502864 | orchestrator | Thursday 01 January 2026 00:50:53 +0000 (0:00:01.066) 0:01:15.642 ****** 2026-01-01 00:52:07.502870 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.502877 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.502884 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.502894 | orchestrator | 2026-01-01 00:52:07.502901 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-01 00:52:07.502908 | orchestrator | Thursday 01 January 2026 00:50:54 +0000 (0:00:00.496) 0:01:16.138 ****** 2026-01-01 00:52:07.502914 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.502921 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.502928 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.502934 | orchestrator | 2026-01-01 00:52:07.502941 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-01 00:52:07.502948 | orchestrator | Thursday 01 January 2026 00:50:55 +0000 (0:00:00.569) 0:01:16.708 ****** 2026-01-01 00:52:07.502954 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.502961 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.502968 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.502974 | orchestrator | 2026-01-01 00:52:07.502981 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] 
******* 2026-01-01 00:52:07.502987 | orchestrator | Thursday 01 January 2026 00:50:55 +0000 (0:00:00.761) 0:01:17.470 ****** 2026-01-01 00:52:07.502994 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.503001 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.503007 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.503014 | orchestrator | 2026-01-01 00:52:07.503046 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-01 00:52:07.503053 | orchestrator | Thursday 01 January 2026 00:50:56 +0000 (0:00:00.781) 0:01:18.251 ****** 2026-01-01 00:52:07.503060 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503066 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503073 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503080 | orchestrator | 2026-01-01 00:52:07.503086 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-01 00:52:07.503093 | orchestrator | Thursday 01 January 2026 00:50:56 +0000 (0:00:00.390) 0:01:18.642 ****** 2026-01-01 00:52:07.503100 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503106 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503113 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503120 | orchestrator | 2026-01-01 00:52:07.503126 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-01 00:52:07.503141 | orchestrator | Thursday 01 January 2026 00:50:57 +0000 (0:00:00.373) 0:01:19.015 ****** 2026-01-01 00:52:07.503148 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503155 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503161 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503168 | orchestrator | 2026-01-01 00:52:07.503175 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-01 
00:52:07.503181 | orchestrator | Thursday 01 January 2026 00:50:57 +0000 (0:00:00.363) 0:01:19.379 ****** 2026-01-01 00:52:07.503188 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503194 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503201 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503207 | orchestrator | 2026-01-01 00:52:07.503214 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-01 00:52:07.503221 | orchestrator | Thursday 01 January 2026 00:50:58 +0000 (0:00:00.606) 0:01:19.985 ****** 2026-01-01 00:52:07.503227 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503234 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503240 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503247 | orchestrator | 2026-01-01 00:52:07.503253 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-01 00:52:07.503260 | orchestrator | Thursday 01 January 2026 00:50:58 +0000 (0:00:00.352) 0:01:20.338 ****** 2026-01-01 00:52:07.503267 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503274 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503280 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503287 | orchestrator | 2026-01-01 00:52:07.503293 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-01 00:52:07.503300 | orchestrator | Thursday 01 January 2026 00:50:58 +0000 (0:00:00.337) 0:01:20.675 ****** 2026-01-01 00:52:07.503307 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503313 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503320 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503327 | orchestrator | 2026-01-01 00:52:07.503333 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-01 
00:52:07.503340 | orchestrator | Thursday 01 January 2026 00:50:59 +0000 (0:00:00.459) 0:01:21.135 ****** 2026-01-01 00:52:07.503347 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503353 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503360 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503366 | orchestrator | 2026-01-01 00:52:07.503373 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-01 00:52:07.503380 | orchestrator | Thursday 01 January 2026 00:51:00 +0000 (0:00:00.636) 0:01:21.771 ****** 2026-01-01 00:52:07.503386 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503393 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503399 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503406 | orchestrator | 2026-01-01 00:52:07.503413 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-01 00:52:07.503419 | orchestrator | Thursday 01 January 2026 00:51:00 +0000 (0:00:00.348) 0:01:22.120 ****** 2026-01-01 00:52:07.503426 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503432 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503439 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503446 | orchestrator | 2026-01-01 00:52:07.503452 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-01 00:52:07.503459 | orchestrator | Thursday 01 January 2026 00:51:00 +0000 (0:00:00.365) 0:01:22.486 ****** 2026-01-01 00:52:07.503466 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503472 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503479 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503485 | orchestrator | 2026-01-01 00:52:07.503492 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-01 
00:52:07.503503 | orchestrator | Thursday 01 January 2026 00:51:01 +0000 (0:00:00.339) 0:01:22.825 ****** 2026-01-01 00:52:07.503510 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503516 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503527 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503534 | orchestrator | 2026-01-01 00:52:07.503541 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-01 00:52:07.503548 | orchestrator | Thursday 01 January 2026 00:51:01 +0000 (0:00:00.313) 0:01:23.139 ****** 2026-01-01 00:52:07.503557 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:52:07.503564 | orchestrator | 2026-01-01 00:52:07.503571 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-01 00:52:07.503578 | orchestrator | Thursday 01 January 2026 00:51:02 +0000 (0:00:00.860) 0:01:24.000 ****** 2026-01-01 00:52:07.503584 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.503591 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.503598 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.503604 | orchestrator | 2026-01-01 00:52:07.503611 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-01 00:52:07.503617 | orchestrator | Thursday 01 January 2026 00:51:02 +0000 (0:00:00.483) 0:01:24.483 ****** 2026-01-01 00:52:07.503624 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.503631 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.503637 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.503644 | orchestrator | 2026-01-01 00:52:07.503651 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-01 00:52:07.503657 | orchestrator | Thursday 01 January 2026 00:51:03 +0000 (0:00:00.632) 
0:01:25.116 ****** 2026-01-01 00:52:07.503664 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503670 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503677 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503683 | orchestrator | 2026-01-01 00:52:07.503690 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-01 00:52:07.503697 | orchestrator | Thursday 01 January 2026 00:51:04 +0000 (0:00:00.686) 0:01:25.802 ****** 2026-01-01 00:52:07.503703 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503710 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503716 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503723 | orchestrator | 2026-01-01 00:52:07.503744 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-01 00:52:07.503751 | orchestrator | Thursday 01 January 2026 00:51:04 +0000 (0:00:00.496) 0:01:26.299 ****** 2026-01-01 00:52:07.503758 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503764 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503771 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503777 | orchestrator | 2026-01-01 00:52:07.503784 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-01 00:52:07.503791 | orchestrator | Thursday 01 January 2026 00:51:04 +0000 (0:00:00.339) 0:01:26.638 ****** 2026-01-01 00:52:07.503797 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503804 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503811 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503817 | orchestrator | 2026-01-01 00:52:07.503824 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-01 00:52:07.503831 | orchestrator | Thursday 01 January 2026 00:51:05 +0000 
(0:00:00.380) 0:01:27.019 ****** 2026-01-01 00:52:07.503837 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503844 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503851 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503857 | orchestrator | 2026-01-01 00:52:07.503864 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-01 00:52:07.503871 | orchestrator | Thursday 01 January 2026 00:51:05 +0000 (0:00:00.625) 0:01:27.645 ****** 2026-01-01 00:52:07.503882 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.503888 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.503895 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.503902 | orchestrator | 2026-01-01 00:52:07.503908 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-01 00:52:07.503915 | orchestrator | Thursday 01 January 2026 00:51:06 +0000 (0:00:00.396) 0:01:28.042 ****** 2026-01-01 00:52:07.503922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.503937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.503944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.503956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.503967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.503975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.503982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 
00:52:07.503989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.503996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504007 | orchestrator | 2026-01-01 00:52:07.504014 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-01 00:52:07.504021 | orchestrator | Thursday 01 January 2026 00:51:08 +0000 (0:00:01.735) 0:01:29.777 ****** 2026-01-01 00:52:07.504028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504042 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504102 | orchestrator | 2026-01-01 00:52:07.504109 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-01 00:52:07.504116 | orchestrator | Thursday 01 January 2026 00:51:12 +0000 (0:00:04.443) 0:01:34.221 ****** 2026-01-01 00:52:07.504123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504198 | orchestrator | 2026-01-01 00:52:07.504204 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-01 00:52:07.504211 | orchestrator | Thursday 01 January 2026 00:51:15 +0000 (0:00:02.872) 0:01:37.094 ****** 2026-01-01 00:52:07.504218 | orchestrator | 2026-01-01 00:52:07.504225 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-01 00:52:07.504231 | orchestrator | Thursday 01 January 2026 00:51:15 +0000 (0:00:00.143) 0:01:37.238 ****** 2026-01-01 00:52:07.504238 | orchestrator | 2026-01-01 00:52:07.504245 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-01 00:52:07.504251 | orchestrator | Thursday 01 January 2026 00:51:15 +0000 (0:00:00.073) 0:01:37.312 ****** 2026-01-01 
00:52:07.504258 | orchestrator | 2026-01-01 00:52:07.504265 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-01 00:52:07.504271 | orchestrator | Thursday 01 January 2026 00:51:15 +0000 (0:00:00.100) 0:01:37.412 ****** 2026-01-01 00:52:07.504278 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:52:07.504285 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:52:07.504291 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:52:07.504298 | orchestrator | 2026-01-01 00:52:07.504305 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-01 00:52:07.504311 | orchestrator | Thursday 01 January 2026 00:51:19 +0000 (0:00:03.624) 0:01:41.037 ****** 2026-01-01 00:52:07.504318 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:52:07.504325 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:52:07.504331 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:52:07.504338 | orchestrator | 2026-01-01 00:52:07.504345 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-01 00:52:07.504351 | orchestrator | Thursday 01 January 2026 00:51:21 +0000 (0:00:02.598) 0:01:43.635 ****** 2026-01-01 00:52:07.504358 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:52:07.504365 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:52:07.504371 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:52:07.504378 | orchestrator | 2026-01-01 00:52:07.504385 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-01 00:52:07.504391 | orchestrator | Thursday 01 January 2026 00:51:24 +0000 (0:00:03.038) 0:01:46.673 ****** 2026-01-01 00:52:07.504398 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.504405 | orchestrator | 2026-01-01 00:52:07.504411 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] 
****************************** 2026-01-01 00:52:07.504418 | orchestrator | Thursday 01 January 2026 00:51:25 +0000 (0:00:00.132) 0:01:46.806 ****** 2026-01-01 00:52:07.504425 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.504431 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.504438 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.504444 | orchestrator | 2026-01-01 00:52:07.504451 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-01 00:52:07.504458 | orchestrator | Thursday 01 January 2026 00:51:26 +0000 (0:00:00.930) 0:01:47.737 ****** 2026-01-01 00:52:07.504465 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.504471 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.504478 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:52:07.504484 | orchestrator | 2026-01-01 00:52:07.504492 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-01 00:52:07.504503 | orchestrator | Thursday 01 January 2026 00:51:26 +0000 (0:00:00.728) 0:01:48.466 ****** 2026-01-01 00:52:07.504515 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.504524 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.504531 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.504537 | orchestrator | 2026-01-01 00:52:07.504544 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-01 00:52:07.504551 | orchestrator | Thursday 01 January 2026 00:51:27 +0000 (0:00:00.972) 0:01:49.438 ****** 2026-01-01 00:52:07.504557 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.504568 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.504575 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:52:07.504581 | orchestrator | 2026-01-01 00:52:07.504588 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-01 
00:52:07.504595 | orchestrator | Thursday 01 January 2026 00:51:29 +0000 (0:00:01.297) 0:01:50.736 ****** 2026-01-01 00:52:07.504601 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.504608 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.504619 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.504626 | orchestrator | 2026-01-01 00:52:07.504632 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-01 00:52:07.504639 | orchestrator | Thursday 01 January 2026 00:51:30 +0000 (0:00:01.143) 0:01:51.879 ****** 2026-01-01 00:52:07.504646 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.504655 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.504662 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.504669 | orchestrator | 2026-01-01 00:52:07.504675 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-01-01 00:52:07.504682 | orchestrator | Thursday 01 January 2026 00:51:31 +0000 (0:00:00.956) 0:01:52.836 ****** 2026-01-01 00:52:07.504689 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.504695 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.504702 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.504709 | orchestrator | 2026-01-01 00:52:07.504715 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-01 00:52:07.504722 | orchestrator | Thursday 01 January 2026 00:51:31 +0000 (0:00:00.329) 0:01:53.165 ****** 2026-01-01 00:52:07.504742 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504750 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504757 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504764 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504771 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504778 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504789 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504796 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504808 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504815 | orchestrator | 2026-01-01 00:52:07.504822 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-01 00:52:07.504831 | orchestrator | Thursday 01 January 2026 00:51:33 +0000 (0:00:01.726) 0:01:54.892 ****** 2026-01-01 00:52:07.504839 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504846 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504853 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504860 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504884 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504905 | orchestrator | 2026-01-01 00:52:07.504912 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-01-01 00:52:07.504919 | orchestrator | Thursday 01 January 2026 00:51:37 +0000 (0:00:03.918) 0:01:58.811 ****** 2026-01-01 00:52:07.504929 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504942 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504950 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504970 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.504995 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 00:52:07.505002 | orchestrator | 2026-01-01 00:52:07.505009 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-01 00:52:07.505015 | orchestrator | Thursday 01 January 2026 00:51:40 +0000 (0:00:02.882) 0:02:01.694 ****** 2026-01-01 00:52:07.505022 | orchestrator | 2026-01-01 00:52:07.505029 | orchestrator 
| TASK [ovn-db : Flush handlers] ************************************************* 2026-01-01 00:52:07.505035 | orchestrator | Thursday 01 January 2026 00:51:40 +0000 (0:00:00.080) 0:02:01.774 ****** 2026-01-01 00:52:07.505042 | orchestrator | 2026-01-01 00:52:07.505049 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-01 00:52:07.505055 | orchestrator | Thursday 01 January 2026 00:51:40 +0000 (0:00:00.094) 0:02:01.869 ****** 2026-01-01 00:52:07.505062 | orchestrator | 2026-01-01 00:52:07.505068 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-01 00:52:07.505075 | orchestrator | Thursday 01 January 2026 00:51:40 +0000 (0:00:00.155) 0:02:02.024 ****** 2026-01-01 00:52:07.505082 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:52:07.505088 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:52:07.505095 | orchestrator | 2026-01-01 00:52:07.505105 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-01 00:52:07.505112 | orchestrator | Thursday 01 January 2026 00:51:47 +0000 (0:00:06.980) 0:02:09.005 ****** 2026-01-01 00:52:07.505119 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:52:07.505126 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:52:07.505132 | orchestrator | 2026-01-01 00:52:07.505142 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-01 00:52:07.505149 | orchestrator | Thursday 01 January 2026 00:51:53 +0000 (0:00:06.247) 0:02:15.253 ****** 2026-01-01 00:52:07.505156 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:52:07.505162 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:52:07.505169 | orchestrator | 2026-01-01 00:52:07.505175 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-01 00:52:07.505182 | orchestrator | Thursday 01 January 
2026 00:52:00 +0000 (0:00:06.684) 0:02:21.937 ****** 2026-01-01 00:52:07.505189 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:52:07.505195 | orchestrator | 2026-01-01 00:52:07.505202 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-01 00:52:07.505209 | orchestrator | Thursday 01 January 2026 00:52:00 +0000 (0:00:00.149) 0:02:22.087 ****** 2026-01-01 00:52:07.505215 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.505222 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.505229 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.505235 | orchestrator | 2026-01-01 00:52:07.505242 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-01 00:52:07.505248 | orchestrator | Thursday 01 January 2026 00:52:01 +0000 (0:00:00.722) 0:02:22.810 ****** 2026-01-01 00:52:07.505259 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.505266 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.505273 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:52:07.505279 | orchestrator | 2026-01-01 00:52:07.505286 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-01 00:52:07.505293 | orchestrator | Thursday 01 January 2026 00:52:01 +0000 (0:00:00.560) 0:02:23.370 ****** 2026-01-01 00:52:07.505299 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.505306 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.505313 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.505319 | orchestrator | 2026-01-01 00:52:07.505326 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-01 00:52:07.505333 | orchestrator | Thursday 01 January 2026 00:52:02 +0000 (0:00:00.796) 0:02:24.166 ****** 2026-01-01 00:52:07.505339 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:52:07.505346 | orchestrator | changed: 
[testbed-node-0] 2026-01-01 00:52:07.505352 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:52:07.505359 | orchestrator | 2026-01-01 00:52:07.505366 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-01 00:52:07.505372 | orchestrator | Thursday 01 January 2026 00:52:03 +0000 (0:00:00.678) 0:02:24.845 ****** 2026-01-01 00:52:07.505379 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.505386 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.505392 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.505399 | orchestrator | 2026-01-01 00:52:07.505406 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-01 00:52:07.505412 | orchestrator | Thursday 01 January 2026 00:52:03 +0000 (0:00:00.739) 0:02:25.584 ****** 2026-01-01 00:52:07.505419 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:52:07.505425 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:52:07.505432 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:52:07.505439 | orchestrator | 2026-01-01 00:52:07.505445 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:52:07.505452 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-01 00:52:07.505459 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-01 00:52:07.505466 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-01 00:52:07.505473 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:52:07.505479 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 00:52:07.505486 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2026-01-01 00:52:07.505493 | orchestrator | 2026-01-01 00:52:07.505499 | orchestrator | 2026-01-01 00:52:07.505506 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:52:07.505513 | orchestrator | Thursday 01 January 2026 00:52:04 +0000 (0:00:00.876) 0:02:26.460 ****** 2026-01-01 00:52:07.505519 | orchestrator | =============================================================================== 2026-01-01 00:52:07.505526 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.81s 2026-01-01 00:52:07.505533 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.77s 2026-01-01 00:52:07.505539 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 10.61s 2026-01-01 00:52:07.505546 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.72s 2026-01-01 00:52:07.505552 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.85s 2026-01-01 00:52:07.505564 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.44s 2026-01-01 00:52:07.505571 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.92s 2026-01-01 00:52:07.505581 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.39s 2026-01-01 00:52:07.505588 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 3.14s 2026-01-01 00:52:07.505595 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.88s 2026-01-01 00:52:07.505605 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.87s 2026-01-01 00:52:07.505611 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.23s 2026-01-01 00:52:07.505618 | 
orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.03s 2026-01-01 00:52:07.505625 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.86s 2026-01-01 00:52:07.505631 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.74s 2026-01-01 00:52:07.505638 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.73s 2026-01-01 00:52:07.505645 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.53s 2026-01-01 00:52:07.505651 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.45s 2026-01-01 00:52:07.505658 | orchestrator | ovn-db : Configure OVN SB connection settings --------------------------- 1.30s 2026-01-01 00:52:07.505665 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.19s 2026-01-01 00:52:07.505671 | orchestrator | 2026-01-01 00:52:07 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:52:07.505678 | orchestrator | 2026-01-01 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:52:10.541634 | orchestrator | 2026-01-01 00:52:10 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:52:10.544374 | orchestrator | 2026-01-01 00:52:10 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:52:10.544409 | orchestrator | 2026-01-01 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:52:13.595922 | orchestrator | 2026-01-01 00:52:13 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:52:13.596451 | orchestrator | 2026-01-01 00:52:13 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED 2026-01-01 00:52:13.596482 | orchestrator | 2026-01-01 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 
00:52:16.645509 | orchestrator | 2026-01-01 00:52:16 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:52:16.646980 | orchestrator | 2026-01-01 00:52:16 | INFO  | Task 26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state STARTED
2026-01-01 00:52:16.647027 | orchestrator | 2026-01-01 00:52:16 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:55:22.594402 | orchestrator | 2026-01-01 00:55:22 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED
2026-01-01 00:55:22.595163 | orchestrator | 2026-01-01 00:55:22 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED
2026-01-01 00:55:22.596062 | orchestrator | 2026-01-01 00:55:22 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED
2026-01-01 00:55:22.603069 | orchestrator | 2026-01-01 00:55:22 | INFO  | Task
26e29c9e-9437-4ce6-9993-c19eb8d6457a is in state SUCCESS
2026-01-01 00:55:22.605728 | orchestrator |
2026-01-01 00:55:22.605790 | orchestrator |
2026-01-01 00:55:22.605813 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 00:55:22.605835 | orchestrator |
2026-01-01 00:55:22.605855 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 00:55:22.605876 | orchestrator | Thursday 01 January 2026 00:48:18 +0000 (0:00:00.710) 0:00:00.710 ******
2026-01-01 00:55:22.605896 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.605918 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.605938 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.605958 | orchestrator |
2026-01-01 00:55:22.605978 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 00:55:22.605999 | orchestrator | Thursday 01 January 2026 00:48:19 +0000 (0:00:01.040) 0:00:01.751 ******
2026-01-01 00:55:22.606091 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-01 00:55:22.606117 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-01 00:55:22.606138 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-01 00:55:22.606158 | orchestrator |
2026-01-01 00:55:22.606178 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-01 00:55:22.606199 | orchestrator |
2026-01-01 00:55:22.606220 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-01 00:55:22.606240 | orchestrator | Thursday 01 January 2026 00:48:21 +0000 (0:00:01.667) 0:00:03.418 ******
2026-01-01 00:55:22.606261 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:55:22.606284 | orchestrator |
2026-01-01 00:55:22.606390 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-01 00:55:22.606411 | orchestrator | Thursday 01 January 2026 00:48:22 +0000 (0:00:01.554) 0:00:04.973 ******
2026-01-01 00:55:22.606432 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.607104 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.607122 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.607133 | orchestrator |
2026-01-01 00:55:22.607145 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-01 00:55:22.607156 | orchestrator | Thursday 01 January 2026 00:48:23 +0000 (0:00:00.678) 0:00:05.651 ******
2026-01-01 00:55:22.607174 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:55:22.607192 | orchestrator |
2026-01-01 00:55:22.607210 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-01 00:55:22.607228 | orchestrator | Thursday 01 January 2026 00:48:24 +0000 (0:00:01.579) 0:00:07.230 ******
2026-01-01 00:55:22.607247 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.607266 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.607286 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.607306 | orchestrator |
2026-01-01 00:55:22.607327 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-01 00:55:22.607368 | orchestrator | Thursday 01 January 2026 00:48:25 +0000 (0:00:01.025) 0:00:08.256 ******
2026-01-01 00:55:22.607390 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:55:22.607408 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:55:22.610273 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:55:22.610295 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:55:22.610336 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:55:22.610348 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-01 00:55:22.610361 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-01 00:55:22.610368 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-01 00:55:22.610374 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-01 00:55:22.610381 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-01 00:55:22.610387 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-01 00:55:22.610393 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-01 00:55:22.610399 | orchestrator |
2026-01-01 00:55:22.610406 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-01 00:55:22.610412 | orchestrator | Thursday 01 January 2026 00:48:30 +0000 (0:00:04.402) 0:00:12.659 ******
2026-01-01 00:55:22.610419 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-01 00:55:22.610426 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-01 00:55:22.610432 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-01 00:55:22.610439 | orchestrator |
2026-01-01 00:55:22.610445 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-01 00:55:22.610451 | orchestrator | Thursday 01 January 2026 00:48:31 +0000 (0:00:00.849) 0:00:13.508 ******
2026-01-01 00:55:22.610457 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-01 00:55:22.610464 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-01 00:55:22.610470 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-01 00:55:22.610476 | orchestrator |
2026-01-01 00:55:22.610482 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-01 00:55:22.610488 | orchestrator | Thursday 01 January 2026 00:48:32 +0000 (0:00:01.513) 0:00:15.022 ******
2026-01-01 00:55:22.610494 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-01 00:55:22.610501 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.610523 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-01 00:55:22.610530 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.610536 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-01 00:55:22.610542 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.610548 | orchestrator |
2026-01-01 00:55:22.610554 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-01 00:55:22.610561 | orchestrator | Thursday 01 January 2026 00:48:33 +0000 (0:00:00.858) 0:00:15.880 ******
2026-01-01 00:55:22.610571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-01 00:55:22.610588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-01 00:55:22.610665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-01 00:55:22.610719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:55:22.610730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:55:22.610748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-01 00:55:22.610757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-01 00:55:22.610765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group':
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:55:22.610773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:55:22.610790 | orchestrator | 2026-01-01 00:55:22.610798 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-01 00:55:22.610805 | orchestrator | Thursday 01 January 2026 00:48:36 +0000 (0:00:02.826) 0:00:18.707 ****** 2026-01-01 00:55:22.610817 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.610829 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.610841 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.610852 | orchestrator | 2026-01-01 00:55:22.610864 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-01 00:55:22.610881 | orchestrator | Thursday 01 January 2026 00:48:37 +0000 (0:00:01.497) 0:00:20.204 ****** 2026-01-01 00:55:22.610893 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-01 00:55:22.610906 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-01 00:55:22.610918 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-01 00:55:22.610930 | 
orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-01 00:55:22.610942 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-01 00:55:22.610951 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-01 00:55:22.610958 | orchestrator | 2026-01-01 00:55:22.610966 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-01 00:55:22.610973 | orchestrator | Thursday 01 January 2026 00:48:40 +0000 (0:00:02.764) 0:00:22.969 ****** 2026-01-01 00:55:22.610980 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.610987 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.610994 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.611001 | orchestrator | 2026-01-01 00:55:22.611008 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-01 00:55:22.611015 | orchestrator | Thursday 01 January 2026 00:48:42 +0000 (0:00:02.178) 0:00:25.147 ****** 2026-01-01 00:55:22.611022 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:55:22.611030 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:55:22.611037 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:55:22.611044 | orchestrator | 2026-01-01 00:55:22.611051 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-01 00:55:22.611058 | orchestrator | Thursday 01 January 2026 00:48:47 +0000 (0:00:04.963) 0:00:30.110 ****** 2026-01-01 00:55:22.611066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.611082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.611100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.611114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:55:22.611127 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.611145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.611159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.611171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.611184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:55:22.611196 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.611217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.611238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.611252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.611269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:55:22.611282 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.611294 | orchestrator | 2026-01-01 00:55:22.611307 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-01 00:55:22.611319 | orchestrator | Thursday 01 January 2026 00:48:48 +0000 (0:00:00.987) 0:00:31.098 ****** 2026-01-01 00:55:22.611330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-01 00:55:22.611342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-01 00:55:22.611370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-01 00:55:22.611381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:55:22.611393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.611410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:55:22.611424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:55:22.611436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.611448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:55:22.611477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.611492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:55:22.611505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b', '__omit_place_holder__848cc8c22d38327a194029aa783235eedf51000b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-01 00:55:22.611517 | orchestrator | 2026-01-01 00:55:22.611530 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-01 00:55:22.611544 | orchestrator | Thursday 01 January 2026 00:48:52 +0000 (0:00:03.445) 0:00:34.544 
****** 2026-01-01 00:55:22.611563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-01 00:55:22.611576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-01 00:55:22.611589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': 
'30'}}}) 2026-01-01 00:55:22.611618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:55:22.611630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:55:22.611678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-01-01 00:55:22.611703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:55:22.611717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:55:22.611730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:55:22.611751 | orchestrator | 2026-01-01 00:55:22.611763 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-01 00:55:22.611775 | orchestrator | Thursday 01 January 2026 00:48:56 +0000 (0:00:04.254) 0:00:38.798 ****** 2026-01-01 00:55:22.611788 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-01 00:55:22.611800 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-01 00:55:22.611813 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-01 00:55:22.611825 | orchestrator | 2026-01-01 00:55:22.611838 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-01 00:55:22.611850 | orchestrator | Thursday 01 January 2026 00:49:00 +0000 (0:00:04.417) 0:00:43.216 ****** 2026-01-01 00:55:22.611862 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-01 00:55:22.611875 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-01 00:55:22.611886 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-01 00:55:22.611899 | orchestrator | 2026-01-01 00:55:22.611919 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-01 00:55:22.611927 | orchestrator | Thursday 01 January 2026 00:49:05 +0000 (0:00:04.135) 0:00:47.351 ****** 2026-01-01 00:55:22.611934 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.611941 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.611948 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.611955 | orchestrator | 2026-01-01 00:55:22.611962 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-01 00:55:22.611969 | orchestrator | Thursday 01 January 2026 00:49:05 +0000 (0:00:00.689) 0:00:48.040 ****** 2026-01-01 00:55:22.611977 | orchestrator | changed: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-01 00:55:22.611985 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-01 00:55:22.611992 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-01 00:55:22.611999 | orchestrator | 2026-01-01 00:55:22.612007 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-01 00:55:22.612014 | orchestrator | Thursday 01 January 2026 00:49:08 +0000 (0:00:02.634) 0:00:50.675 ****** 2026-01-01 00:55:22.612021 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-01 00:55:22.612028 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-01 00:55:22.612035 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-01 00:55:22.612042 | orchestrator | 2026-01-01 00:55:22.612050 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-01 00:55:22.612057 | orchestrator | Thursday 01 January 2026 00:49:11 +0000 (0:00:02.877) 0:00:53.552 ****** 2026-01-01 00:55:22.612064 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-01 00:55:22.612071 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-01 00:55:22.612078 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-01 00:55:22.612085 | orchestrator | 2026-01-01 00:55:22.612092 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-01 00:55:22.612100 | orchestrator | Thursday 01 January 2026 00:49:12 +0000 (0:00:01.620) 0:00:55.173 ****** 
2026-01-01 00:55:22.612107 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-01 00:55:22.612118 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-01 00:55:22.612132 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-01 00:55:22.612139 | orchestrator | 2026-01-01 00:55:22.612146 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-01 00:55:22.612153 | orchestrator | Thursday 01 January 2026 00:49:14 +0000 (0:00:01.632) 0:00:56.805 ****** 2026-01-01 00:55:22.612160 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.612167 | orchestrator | 2026-01-01 00:55:22.612175 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-01 00:55:22.612182 | orchestrator | Thursday 01 January 2026 00:49:15 +0000 (0:00:00.995) 0:00:57.801 ****** 2026-01-01 00:55:22.612189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-01 00:55:22.612197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-01 00:55:22.612210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-01 00:55:22.612218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:55:22.612226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:55:22.612241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:55:22.612249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:55:22.612259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:55:22.612271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:55:22.612283 | orchestrator | 2026-01-01 00:55:22.612295 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-01 00:55:22.612307 | orchestrator | Thursday 01 January 2026 00:49:19 +0000 (0:00:03.838) 0:01:01.639 ****** 2026-01-01 00:55:22.612322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612350 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.612361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': 
True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612403 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.612411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612423 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.612430 | orchestrator | 2026-01-01 00:55:22.612438 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-01 00:55:22.612445 | orchestrator | Thursday 01 January 2026 00:49:20 +0000 (0:00:00.881) 0:01:02.521 ****** 2026-01-01 00:55:22.612456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612464 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612479 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.612486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612518 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.612525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612552 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.612560 | orchestrator | 2026-01-01 00:55:22.612567 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-01 00:55:22.612575 | orchestrator | Thursday 01 January 2026 00:49:21 +0000 (0:00:01.126) 0:01:03.647 ****** 2026-01-01 00:55:22.612582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612631 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.612667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612677 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.612684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-01-01 00:55:22.612692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612712 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.612724 | orchestrator | 2026-01-01 00:55:22.612732 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-01 00:55:22.612739 | orchestrator | Thursday 01 January 2026 00:49:22 +0000 (0:00:01.272) 0:01:04.920 ****** 2026-01-01 00:55:22.612746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612776 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.612784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612820 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.612846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612880 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.612892 | orchestrator | 2026-01-01 00:55:22.612902 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-01 00:55:22.612915 | orchestrator | Thursday 01 January 2026 00:49:23 +0000 (0:00:00.873) 0:01:05.794 ****** 2026-01-01 00:55:22.612933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.612947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.612959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.612972 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.612992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.613000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.613008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.613015 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.613026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.613034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.613042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.613049 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.613056 | orchestrator | 2026-01-01 00:55:22.613063 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-01 00:55:22.613071 | orchestrator | Thursday 01 January 2026 00:49:25 +0000 (0:00:01.990) 0:01:07.784 ****** 2026-01-01 00:55:22.613084 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.613096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.613104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.613111 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.613119 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.613131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.613138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.613146 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.613153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.613170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.613178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.613185 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.613192 | orchestrator | 2026-01-01 00:55:22.613200 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal 
TLS certificate] *** 2026-01-01 00:55:22.613207 | orchestrator | Thursday 01 January 2026 00:49:27 +0000 (0:00:01.675) 0:01:09.460 ****** 2026-01-01 00:55:22.613214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.613226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.613233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-01-01 00:55:22.613241 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.613248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.613264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.613280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.613293 | 
orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.613305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.613317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.613329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.613340 | orchestrator | skipping: [testbed-node-2] 
2026-01-01 00:55:22.613352 | orchestrator | 2026-01-01 00:55:22.613364 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-01 00:55:22.613377 | orchestrator | Thursday 01 January 2026 00:49:27 +0000 (0:00:00.606) 0:01:10.067 ****** 2026-01-01 00:55:22.613389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.613427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.613441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.613449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.613457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.613465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.613472 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.613479 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.613490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-01 00:55:22.613503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-01 00:55:22.613510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-01 00:55:22.613517 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.613524 | orchestrator | 2026-01-01 00:55:22.613532 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-01 00:55:22.613539 | orchestrator | Thursday 01 January 2026 00:49:28 +0000 (0:00:00.747) 0:01:10.814 ****** 2026-01-01 00:55:22.613546 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-01 00:55:22.613553 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-01 00:55:22.613565 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-01 00:55:22.613572 | orchestrator | 2026-01-01 00:55:22.613579 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-01 00:55:22.613586 | orchestrator | Thursday 01 January 2026 00:49:30 +0000 (0:00:02.036) 0:01:12.850 ****** 2026-01-01 00:55:22.613594 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-01 00:55:22.613601 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-01 00:55:22.613608 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-01 00:55:22.613615 | orchestrator | 2026-01-01 00:55:22.613622 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-01 00:55:22.613629 | orchestrator | Thursday 01 January 2026 00:49:32 +0000 (0:00:01.473) 0:01:14.324 ****** 2026-01-01 00:55:22.613636 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-01 00:55:22.613677 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-01 00:55:22.613684 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-01 00:55:22.613691 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-01 00:55:22.613699 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.613706 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-01 00:55:22.613713 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.613720 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-01 00:55:22.613738 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.613751 | orchestrator | 2026-01-01 00:55:22.613763 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-01 00:55:22.613775 | orchestrator | Thursday 01 January 2026 00:49:32 +0000 (0:00:00.830) 0:01:15.155 ****** 2026-01-01 00:55:22.613795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-01 00:55:22.613809 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-01 00:55:22.613821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-01 00:55:22.613842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:55:22.613856 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:55:22.613870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-01 00:55:22.613893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:55:22.613905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:55:22.613913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-01 00:55:22.613920 | orchestrator | 2026-01-01 00:55:22.613928 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-01 00:55:22.613935 | orchestrator | Thursday 01 January 2026 00:49:35 +0000 (0:00:02.870) 0:01:18.026 ****** 2026-01-01 00:55:22.613942 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.613949 | orchestrator | 2026-01-01 00:55:22.613956 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-01 00:55:22.613963 | orchestrator | Thursday 01 January 2026 00:49:36 +0000 (0:00:00.716) 0:01:18.742 ****** 2026-01-01 00:55:22.613972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-01 00:55:22.613985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.613993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-01 00:55:22.614056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.614064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-01 00:55:22.614103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 
'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.614115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614130 | orchestrator | 2026-01-01 00:55:22.614138 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-01 00:55:22.614145 | orchestrator | Thursday 01 January 2026 00:49:41 +0000 (0:00:05.421) 
0:01:24.163 ****** 2026-01-01 00:55:22.614153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-01 00:55:22.614166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.614179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614194 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.614206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-01 00:55:22.614214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.614221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614236 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.614249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-01 00:55:22.614262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.614273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614288 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.614295 | orchestrator | 2026-01-01 00:55:22.614303 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-01 00:55:22.614310 | orchestrator | Thursday 01 January 2026 00:49:43 +0000 (0:00:01.557) 0:01:25.721 ****** 2026-01-01 00:55:22.614318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-01 00:55:22.614326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-01 00:55:22.614334 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.614341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-01 00:55:22.614349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-01 00:55:22.614356 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.614364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-01 00:55:22.614376 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-01 00:55:22.614383 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.614391 | orchestrator | 2026-01-01 00:55:22.614410 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-01 00:55:22.614422 | orchestrator | Thursday 01 January 2026 00:49:44 +0000 (0:00:01.499) 0:01:27.221 ****** 2026-01-01 00:55:22.614434 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.614445 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.614458 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.614470 | orchestrator | 2026-01-01 00:55:22.614481 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-01 00:55:22.614492 | orchestrator | Thursday 01 January 2026 00:49:46 +0000 (0:00:01.833) 0:01:29.055 ****** 2026-01-01 00:55:22.614503 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.614514 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.614526 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.614538 | orchestrator | 2026-01-01 00:55:22.614549 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-01 00:55:22.614561 | orchestrator | Thursday 01 January 2026 00:49:49 +0000 (0:00:02.818) 0:01:31.874 ****** 2026-01-01 00:55:22.614572 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.614584 | orchestrator | 2026-01-01 00:55:22.614595 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-01 00:55:22.614607 | orchestrator | Thursday 01 January 2026 00:49:50 +0000 (0:00:00.851) 0:01:32.725 ****** 2026-01-01 
00:55:22.614620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.614660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.614675 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.614779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614815 | orchestrator | 2026-01-01 00:55:22.614823 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-01 00:55:22.614830 | orchestrator | Thursday 01 January 2026 00:49:54 +0000 (0:00:04.503) 0:01:37.228 ****** 2026-01-01 00:55:22.614844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.614852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614867 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.614878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.614886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.614910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614917 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.614925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.614940 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.614947 | orchestrator | 2026-01-01 00:55:22.614959 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-01 00:55:22.614966 | orchestrator | Thursday 01 January 2026 00:49:57 +0000 (0:00:02.905) 0:01:40.134 ****** 2026-01-01 00:55:22.614975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-01 00:55:22.614989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-01 00:55:22.615008 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.615019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-01 00:55:22.615031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-01 00:55:22.615042 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.615054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-01 00:55:22.615065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-01 00:55:22.615077 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.615088 | orchestrator | 2026-01-01 00:55:22.615098 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-01 00:55:22.615110 | orchestrator | Thursday 01 January 2026 00:50:00 +0000 (0:00:02.239) 0:01:42.376 ****** 2026-01-01 00:55:22.615121 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.615132 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.615143 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.615154 | orchestrator | 2026-01-01 00:55:22.615165 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-01 00:55:22.615176 | orchestrator | Thursday 01 January 2026 00:50:01 +0000 (0:00:01.591) 0:01:43.968 ****** 2026-01-01 00:55:22.615188 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.615201 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.615213 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.615225 | orchestrator | 2026-01-01 00:55:22.615244 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-01 00:55:22.615252 | orchestrator | Thursday 01 January 2026 00:50:04 +0000 (0:00:02.486) 0:01:46.454 ****** 2026-01-01 00:55:22.615259 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.615266 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.615273 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.615281 | orchestrator | 2026-01-01 00:55:22.615288 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-01 00:55:22.615295 | orchestrator | Thursday 01 January 2026 00:50:04 +0000 (0:00:00.631) 0:01:47.086 ****** 2026-01-01 00:55:22.615302 | 
orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.615310 | orchestrator | 2026-01-01 00:55:22.615317 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-01 00:55:22.615324 | orchestrator | Thursday 01 January 2026 00:50:05 +0000 (0:00:01.035) 0:01:48.121 ****** 2026-01-01 00:55:22.615332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-01 00:55:22.615353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-01 00:55:22.615361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-01 00:55:22.615369 | orchestrator | 2026-01-01 00:55:22.615376 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-01 00:55:22.615383 | orchestrator | Thursday 01 January 2026 00:50:09 +0000 (0:00:03.165) 0:01:51.286 ****** 2026-01-01 00:55:22.615395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-01 00:55:22.615403 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.615411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-01 00:55:22.615418 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.615426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check 
inter 2000 rise 2 fall 5']}}}})  2026-01-01 00:55:22.615438 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.615446 | orchestrator | 2026-01-01 00:55:22.615457 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-01 00:55:22.615464 | orchestrator | Thursday 01 January 2026 00:50:11 +0000 (0:00:02.797) 0:01:54.084 ****** 2026-01-01 00:55:22.615473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:55:22.615482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:55:22.615490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:55:22.615499 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.615506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:55:22.615514 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.615525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:55:22.615533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-01 00:55:22.615541 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.615548 | orchestrator | 2026-01-01 00:55:22.615555 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-01 00:55:22.615567 | orchestrator | Thursday 01 January 2026 00:50:13 +0000 (0:00:02.189) 0:01:56.273 ****** 2026-01-01 00:55:22.615575 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.615582 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.615589 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.615596 | orchestrator | 2026-01-01 00:55:22.615604 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] 
*********** 2026-01-01 00:55:22.615611 | orchestrator | Thursday 01 January 2026 00:50:14 +0000 (0:00:00.792) 0:01:57.066 ****** 2026-01-01 00:55:22.615618 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.615626 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.615633 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.615661 | orchestrator | 2026-01-01 00:55:22.615670 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-01 00:55:22.615677 | orchestrator | Thursday 01 January 2026 00:50:16 +0000 (0:00:01.535) 0:01:58.601 ****** 2026-01-01 00:55:22.615685 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.615692 | orchestrator | 2026-01-01 00:55:22.615699 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-01 00:55:22.615706 | orchestrator | Thursday 01 January 2026 00:50:17 +0000 (0:00:00.841) 0:01:59.443 ****** 2026-01-01 00:55:22.615717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.615730 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.615798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.615830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615866 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615895 | orchestrator | 2026-01-01 00:55:22.615902 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-01 00:55:22.615909 | orchestrator | Thursday 01 January 2026 00:50:24 +0000 (0:00:07.377) 0:02:06.820 ****** 2026-01-01 00:55:22.615917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.615924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615935 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615959 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.615967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.615978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.615998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.616011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.616019 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.616026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.616037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.616045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.616053 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.616060 | orchestrator | 2026-01-01 00:55:22.616068 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-01 00:55:22.616075 | orchestrator | Thursday 01 January 2026 00:50:25 +0000 (0:00:01.341) 
0:02:08.162 ****** 2026-01-01 00:55:22.616083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-01 00:55:22.616090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-01 00:55:22.616102 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.616110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-01 00:55:22.616117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-01 00:55:22.616124 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.616132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-01 00:55:22.616143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-01 00:55:22.616151 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.616158 | orchestrator | 2026-01-01 00:55:22.616165 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-01 00:55:22.616172 | orchestrator | Thursday 01 January 2026 00:50:27 
+0000 (0:00:01.178) 0:02:09.340 ****** 2026-01-01 00:55:22.616179 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.616187 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.616193 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.616201 | orchestrator | 2026-01-01 00:55:22.616208 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-01 00:55:22.616215 | orchestrator | Thursday 01 January 2026 00:50:28 +0000 (0:00:01.687) 0:02:11.028 ****** 2026-01-01 00:55:22.616222 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.616229 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.616236 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.616243 | orchestrator | 2026-01-01 00:55:22.616250 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-01 00:55:22.616258 | orchestrator | Thursday 01 January 2026 00:50:31 +0000 (0:00:02.267) 0:02:13.296 ****** 2026-01-01 00:55:22.616265 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.616272 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.616279 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.616286 | orchestrator | 2026-01-01 00:55:22.616293 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-01 00:55:22.616301 | orchestrator | Thursday 01 January 2026 00:50:31 +0000 (0:00:00.553) 0:02:13.849 ****** 2026-01-01 00:55:22.616308 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.616315 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.616322 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.616329 | orchestrator | 2026-01-01 00:55:22.616336 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-01 00:55:22.616344 | orchestrator | Thursday 01 January 2026 00:50:31 +0000 
(0:00:00.295) 0:02:14.145 ****** 2026-01-01 00:55:22.616351 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.616358 | orchestrator | 2026-01-01 00:55:22.616365 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-01 00:55:22.616373 | orchestrator | Thursday 01 January 2026 00:50:32 +0000 (0:00:00.926) 0:02:15.072 ****** 2026-01-01 00:55:22.616383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-01 00:55:22.616397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}})  2026-01-01 00:55:22.616409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-01 00:55:22.616417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.616425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-01 00:55:22.616433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  
2026-01-01 00:55:22.617897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-01 00:55:22.617937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-01 00:55:22.617950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617987 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.617993 | orchestrator | 2026-01-01 00:55:22.618001 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-01 00:55:22.618007 | orchestrator | Thursday 01 January 2026 00:50:36 +0000 (0:00:03.878) 0:02:18.950 ****** 2026-01-01 00:55:22.618040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-01 00:55:22.618054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-01 00:55:22.618062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618105 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.618169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-01 00:55:22.618178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-01 00:55:22.618185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-01 00:55:22.618219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-01 00:55:22.618241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-01 
00:55:22.618260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618267 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.618277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618291 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.618311 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.618317 | orchestrator | 2026-01-01 00:55:22.618324 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-01 00:55:22.618331 | orchestrator | Thursday 01 January 2026 00:50:37 +0000 (0:00:00.859) 0:02:19.810 ****** 2026-01-01 00:55:22.618338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-01 00:55:22.618347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  
2026-01-01 00:55:22.618359 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.618366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-01 00:55:22.618372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-01 00:55:22.618379 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.618386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-01 00:55:22.618393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-01 00:55:22.618399 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.618406 | orchestrator |
2026-01-01 00:55:22.618412 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-01-01 00:55:22.618419 | orchestrator | Thursday 01 January 2026 00:50:38 +0000 (0:00:01.080) 0:02:20.890 ******
2026-01-01 00:55:22.618426 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.618432 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.618442 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.618449 | orchestrator |
2026-01-01 00:55:22.618456 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-01-01 00:55:22.618462 | orchestrator | Thursday 01 January 2026 00:50:40 +0000 (0:00:01.792) 0:02:22.683 ******
2026-01-01 00:55:22.618469 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.618475 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.618482 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.618489 | orchestrator |
2026-01-01 00:55:22.618495 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-01-01 00:55:22.618502 | orchestrator | Thursday 01 January 2026 00:50:42 +0000 (0:00:01.865) 0:02:24.548 ******
2026-01-01 00:55:22.618508 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.618515 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.618521 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.618528 | orchestrator |
2026-01-01 00:55:22.618535 | orchestrator | TASK [include_role : glance] ***************************************************
2026-01-01 00:55:22.618541 | orchestrator | Thursday 01 January 2026 00:50:42 +0000 (0:00:00.580) 0:02:25.129 ******
2026-01-01 00:55:22.618548 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:55:22.618555 | orchestrator |
2026-01-01 00:55:22.618561 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-01-01 00:55:22.618568 | orchestrator | Thursday 01 January 2026 00:50:43 +0000 (0:00:00.886) 0:02:26.016 ******
2026-01-01 00:55:22.618582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-01 00:55:22.618599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-01 00:55:22.618608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-01 00:55:22.618638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-01 00:55:22.618706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-01 00:55:22.618731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-01 00:55:22.618743 | orchestrator |
2026-01-01 00:55:22.618753 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-01-01 00:55:22.618763 | orchestrator | Thursday 01 January 2026 00:50:48 +0000 (0:00:04.686) 0:02:30.702 ******
2026-01-01 00:55:22.618779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-01 00:55:22.618805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-01 00:55:22.618840 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.618870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-01 00:55:22.618910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-01 00:55:22.618952 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.618994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-01 00:55:22.619036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-01-01 00:55:22.619075 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.619101 | orchestrator |
2026-01-01 00:55:22.619126 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-01-01 00:55:22.619152 | orchestrator | Thursday 01 January 2026 00:50:51 +0000 (0:00:03.551) 0:02:34.254 ******
2026-01-01 00:55:22.619178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-01 00:55:22.619205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-01 00:55:22.619231 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.619256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-01 00:55:22.619294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-01 00:55:22.619319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-01 00:55:22.619346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-01 00:55:22.619386 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.619409 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.619433 | orchestrator |
2026-01-01 00:55:22.619457 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-01-01 00:55:22.619481 | orchestrator | Thursday 01 January 2026 00:50:56 +0000 (0:00:04.493) 0:02:38.748 ******
2026-01-01 00:55:22.619505 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.619528 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.619551 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.619574 | orchestrator |
2026-01-01 00:55:22.619596 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-01-01 00:55:22.619621 | orchestrator | Thursday 01 January 2026 00:50:58 +0000 (0:00:01.572) 0:02:40.321 ******
2026-01-01 00:55:22.619670 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.619695 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.619718 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.619739 | orchestrator |
2026-01-01 00:55:22.619762 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-01-01 00:55:22.619805 | orchestrator | Thursday 01 January 2026 00:51:00 +0000 (0:00:02.418) 0:02:42.739 ******
2026-01-01 00:55:22.619826 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.619846 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.619854 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.619864 | orchestrator |
2026-01-01 00:55:22.619873 | orchestrator | TASK [include_role : grafana] **************************************************
2026-01-01 00:55:22.619882 | orchestrator | Thursday 01 January 2026 00:51:01 +0000 (0:00:00.579) 0:02:43.319 ******
2026-01-01 00:55:22.619892 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:55:22.619902 | orchestrator |
2026-01-01 00:55:22.619911 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-01-01 00:55:22.619920 | orchestrator | Thursday 01 January 2026 00:51:01 +0000 (0:00:00.920) 0:02:44.239 ******
2026-01-01 00:55:22.619931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-01 00:55:22.619944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-01 00:55:22.619963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-01 00:55:22.619985 | orchestrator |
2026-01-01 00:55:22.619995 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-01-01 00:55:22.620005 | orchestrator | Thursday 01 January 2026 00:51:05 +0000 (0:00:03.886) 0:02:48.125 ******
2026-01-01 00:55:22.620017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-01 00:55:22.620036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-01 00:55:22.620044 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.620050 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.620056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-01 00:55:22.620063 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.620069 | orchestrator |
2026-01-01 00:55:22.620075 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-01-01 00:55:22.620081 | orchestrator | Thursday 01 January 2026 00:51:06 +0000 (0:00:00.728) 0:02:48.854 ******
2026-01-01 00:55:22.620088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-01 00:55:22.620095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-01 00:55:22.620102 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.620108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-01 00:55:22.620119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-01 00:55:22.620125 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.620135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-01-01 00:55:22.620141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-01-01 00:55:22.620148 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.620154 | orchestrator |
2026-01-01 00:55:22.620160 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-01-01 00:55:22.620166 | orchestrator | Thursday 01 January 2026 00:51:07 +0000 (0:00:00.750) 0:02:49.605 ******
2026-01-01 00:55:22.620172 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.620178 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.620184 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.620190 | orchestrator |
2026-01-01 00:55:22.620196 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-01-01 00:55:22.620202 | orchestrator | Thursday 01 January 2026 00:51:08 +0000 (0:00:01.545) 0:02:51.151 ******
2026-01-01 00:55:22.620208 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.620214 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.620220 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.620226 | orchestrator |
2026-01-01 00:55:22.620232 | orchestrator | TASK [include_role : heat] *****************************************************
2026-01-01 00:55:22.620239 | orchestrator | Thursday 01 January 2026 00:51:11 +0000 (0:00:02.655) 0:02:53.806 ******
2026-01-01 00:55:22.620245 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.620251 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.620257 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.620263 | orchestrator |
2026-01-01 00:55:22.620269 | orchestrator | TASK [include_role : horizon] **************************************************
2026-01-01 00:55:22.620275 | orchestrator | Thursday 01 January 2026 00:51:12 +0000 (0:00:00.587) 0:02:54.394 ******
2026-01-01 00:55:22.620281 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:55:22.620287 | orchestrator |
2026-01-01 00:55:22.620293 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-01-01 00:55:22.620299 | orchestrator | Thursday 01 January 2026 00:51:13 +0000 (0:00:01.074) 0:02:55.468 ******
2026-01-01 00:55:22.620311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'},
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 00:55:22.620328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 00:55:22.620343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 00:55:22.620354 | orchestrator | 2026-01-01 00:55:22.620360 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-01 00:55:22.620370 | orchestrator | Thursday 01 January 2026 00:51:17 +0000 (0:00:04.801) 0:03:00.269 ****** 2026-01-01 00:55:22.620381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 00:55:22.620389 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.620399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 00:55:22.620411 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.620423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 00:55:22.620430 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.620440 | orchestrator | 2026-01-01 00:55:22.620446 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-01 00:55:22.620452 | orchestrator | Thursday 01 January 2026 00:51:19 +0000 (0:00:01.213) 0:03:01.483 ****** 2026-01-01 00:55:22.620460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-01 00:55:22.620468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:55:22.620476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-01 00:55:22.620483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:55:22.620496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-01 00:55:22.620503 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.620509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-01 00:55:22.620515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:55:22.620522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-01 00:55:22.620528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-01 00:55:22.620535 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:55:22.620541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:55:22.620555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-01 00:55:22.620562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-01 00:55:22.620568 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.620574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-01 00:55:22.620580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-01 00:55:22.620587 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.620593 | orchestrator | 2026-01-01 00:55:22.620599 | 
orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-01 00:55:22.620605 | orchestrator | Thursday 01 January 2026 00:51:20 +0000 (0:00:01.316) 0:03:02.799 ****** 2026-01-01 00:55:22.620611 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.620617 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.620623 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.620630 | orchestrator | 2026-01-01 00:55:22.620636 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-01 00:55:22.620669 | orchestrator | Thursday 01 January 2026 00:51:21 +0000 (0:00:01.409) 0:03:04.209 ****** 2026-01-01 00:55:22.620676 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.620682 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.620688 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.620694 | orchestrator | 2026-01-01 00:55:22.620700 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-01 00:55:22.620707 | orchestrator | Thursday 01 January 2026 00:51:24 +0000 (0:00:02.076) 0:03:06.285 ****** 2026-01-01 00:55:22.620713 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.620719 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.620725 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.620731 | orchestrator | 2026-01-01 00:55:22.620740 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-01 00:55:22.620747 | orchestrator | Thursday 01 January 2026 00:51:24 +0000 (0:00:00.314) 0:03:06.600 ****** 2026-01-01 00:55:22.620753 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.620759 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.620765 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.620771 | orchestrator | 2026-01-01 00:55:22.620777 | 
orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-01 00:55:22.620784 | orchestrator | Thursday 01 January 2026 00:51:24 +0000 (0:00:00.625) 0:03:07.226 ****** 2026-01-01 00:55:22.620790 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.620796 | orchestrator | 2026-01-01 00:55:22.620802 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-01 00:55:22.620808 | orchestrator | Thursday 01 January 2026 00:51:26 +0000 (0:00:01.280) 0:03:08.506 ****** 2026-01-01 00:55:22.620815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:55:22.620834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:55:22.620842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:55:22.620849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8023'], 'timeout': '30'}}})  2026-01-01 00:55:22.620860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:55:22.620866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:55:22.620878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:55:22.620889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 2026-01-01 00:55:22.620898 | orchestrator | 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:55:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:22.620905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:55:22.620911 | orchestrator | 2026-01-01 00:55:22.620917 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-01 00:55:22.620924 | 
orchestrator | Thursday 01 January 2026 00:51:31 +0000 (0:00:05.010) 0:03:13.517 ****** 2026-01-01 00:55:22.620934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-01 00:55:22.620941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:55:22.620960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:55:22.620966 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.620980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-01 00:55:22.620987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:55:22.620993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:55:22.621000 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.621010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-01-01 00:55:22.621021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:55:22.621028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:55:22.621034 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.621041 | orchestrator | 2026-01-01 00:55:22.621051 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-01 00:55:22.621057 | orchestrator | Thursday 01 January 2026 00:51:32 +0000 (0:00:01.109) 0:03:14.627 ****** 2026-01-01 00:55:22.621064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-01 00:55:22.621072 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-01 00:55:22.621079 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.621085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-01 00:55:22.621092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-01 00:55:22.621098 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.621105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-01 00:55:22.621111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-01 00:55:22.621117 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.621128 | orchestrator | 2026-01-01 00:55:22.621134 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-01 00:55:22.621141 | orchestrator | Thursday 01 January 2026 00:51:33 +0000 (0:00:00.955) 0:03:15.582 ****** 2026-01-01 00:55:22.621147 | orchestrator | 
changed: [testbed-node-0] 2026-01-01 00:55:22.621153 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.621160 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.621166 | orchestrator | 2026-01-01 00:55:22.621172 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-01 00:55:22.621178 | orchestrator | Thursday 01 January 2026 00:51:34 +0000 (0:00:01.390) 0:03:16.973 ****** 2026-01-01 00:55:22.621185 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.621191 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.621197 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.621203 | orchestrator | 2026-01-01 00:55:22.621209 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-01 00:55:22.621216 | orchestrator | Thursday 01 January 2026 00:51:36 +0000 (0:00:02.148) 0:03:19.122 ****** 2026-01-01 00:55:22.621222 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.621228 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.621234 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.621240 | orchestrator | 2026-01-01 00:55:22.621246 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-01 00:55:22.621253 | orchestrator | Thursday 01 January 2026 00:51:37 +0000 (0:00:00.630) 0:03:19.753 ****** 2026-01-01 00:55:22.621259 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.621265 | orchestrator | 2026-01-01 00:55:22.621271 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-01 00:55:22.621277 | orchestrator | Thursday 01 January 2026 00:51:38 +0000 (0:00:01.066) 0:03:20.819 ****** 2026-01-01 00:55:22.621284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-01 00:55:22.621311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.621319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-01 00:55:22.621333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.621340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-01 00:55:22.621347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.621353 | orchestrator | 2026-01-01 00:55:22.621360 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-01 00:55:22.621366 | orchestrator | Thursday 01 January 2026 00:51:43 +0000 (0:00:04.584) 0:03:25.404 ****** 2026-01-01 00:55:22.621377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-01 00:55:22.621858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-01 00:55:22.621890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.621897 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.621904 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.621911 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.621917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-01 00:55:22.621924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.621938 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.621944 | orchestrator | 2026-01-01 00:55:22.621956 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-01 00:55:22.621963 | orchestrator | Thursday 01 January 2026 00:51:44 +0000 (0:00:01.282) 0:03:26.687 ****** 2026-01-01 00:55:22.621970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-01 00:55:22.621977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-01 00:55:22.621984 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.621990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-01 00:55:22.621996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-01 00:55:22.622005 | orchestrator | skipping: [testbed-node-1] 2026-01-01 
00:55:22.622010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-01 00:55:22.622047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-01 00:55:22.622054 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.622060 | orchestrator | 2026-01-01 00:55:22.622065 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-01 00:55:22.622071 | orchestrator | Thursday 01 January 2026 00:51:45 +0000 (0:00:01.117) 0:03:27.805 ****** 2026-01-01 00:55:22.622076 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.622082 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.622087 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.622092 | orchestrator | 2026-01-01 00:55:22.622098 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-01 00:55:22.622104 | orchestrator | Thursday 01 January 2026 00:51:46 +0000 (0:00:01.230) 0:03:29.036 ****** 2026-01-01 00:55:22.622109 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.622114 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.622120 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.622125 | orchestrator | 2026-01-01 00:55:22.622130 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-01 00:55:22.622136 | orchestrator | Thursday 01 January 2026 00:51:49 +0000 (0:00:02.438) 0:03:31.474 ****** 2026-01-01 00:55:22.622141 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.622147 | orchestrator | 2026-01-01 00:55:22.622152 | orchestrator 
| TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-01 00:55:22.622157 | orchestrator | Thursday 01 January 2026 00:51:50 +0000 (0:00:01.435) 0:03:32.910 ****** 2026-01-01 00:55:22.622163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-01 00:55:22.622175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-01 00:55:22.622208 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-01 00:55:22.622243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622260 | orchestrator | 2026-01-01 00:55:22.622266 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-01 00:55:22.622271 | orchestrator | Thursday 01 January 2026 00:51:54 +0000 (0:00:04.210) 0:03:37.121 ****** 2026-01-01 00:55:22.622281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-01 00:55:22.622286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 
'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8786', 'listen_port': '8786'}}}})  2026-01-01 00:55:22.622310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622316 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.622321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622342 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.622352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-01 00:55:22.622360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.622381 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.622387 | orchestrator | 2026-01-01 00:55:22.622392 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-01 00:55:22.622398 | orchestrator | Thursday 01 January 2026 00:51:55 +0000 (0:00:00.733) 0:03:37.855 ****** 2026-01-01 00:55:22.622403 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-01 00:55:22.622409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-01 00:55:22.622415 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.622420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-01 00:55:22.622426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-01 00:55:22.622431 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.622437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-01-01 00:55:22.622442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-01-01 00:55:22.622448 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.622453 | orchestrator |
2026-01-01 00:55:22.622460 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-01-01 00:55:22.622467 | orchestrator | Thursday 01 January 2026 00:51:57 +0000 (0:00:01.741) 0:03:39.597 ******
2026-01-01 00:55:22.622476 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.622483 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.622490 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.622496 | orchestrator |
2026-01-01 00:55:22.622503 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-01-01 00:55:22.622509 | orchestrator | Thursday 01 January 2026 00:51:58 +0000 (0:00:01.223) 0:03:40.820 ******
2026-01-01 00:55:22.622515 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.622522 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.622528 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.622534 | orchestrator |
2026-01-01 00:55:22.622541 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-01-01 00:55:22.622547 | orchestrator | Thursday 01 January 2026 00:52:00 +0000 (0:00:02.057) 0:03:42.878 ******
2026-01-01 00:55:22.622554 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:55:22.622560 | orchestrator |
2026-01-01 00:55:22.622566 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-01-01 00:55:22.622573 | orchestrator | Thursday 01 January 2026 00:52:01 +0000 (0:00:01.383) 0:03:44.261 ******
2026-01-01 00:55:22.622579 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-01 00:55:22.622586 | orchestrator |
2026-01-01 00:55:22.622592 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-01-01 00:55:22.622599 | orchestrator | Thursday 01 January 2026 00:52:04 +0000 (0:00:02.814) 0:03:47.076 ******
2026-01-01 00:55:22.622613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro',
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:55:22.622621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:55:22.622628 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.622657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:55:22.622679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:55:22.622691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:55:22.622706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:55:22.622714 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.622721 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.622728 | orchestrator | 2026-01-01 00:55:22.622734 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-01 00:55:22.622740 | orchestrator | Thursday 01 January 2026 00:52:06 +0000 (0:00:02.200) 0:03:49.276 ****** 2026-01-01 00:55:22.622750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:55:22.622761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:55:22.622768 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.622778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:55:22.622794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:55:22.622802 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.622809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:55:22.622817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-01 00:55:22.622823 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.622828 | orchestrator | 2026-01-01 00:55:22.622834 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-01 00:55:22.622839 | orchestrator | Thursday 01 January 2026 00:52:09 +0000 (0:00:02.392) 0:03:51.669 ****** 2026-01-01 00:55:22.622849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:55:22.622860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:55:22.622866 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.622874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:55:22.622880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:55:22.622886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:55:22.622892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-01 00:55:22.622897 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.622903 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.622908 | orchestrator | 2026-01-01 00:55:22.622914 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-01 00:55:22.622919 | orchestrator | Thursday 01 January 2026 00:52:12 +0000 (0:00:03.003) 0:03:54.672 ****** 2026-01-01 00:55:22.622925 | orchestrator | changed: [testbed-node-1] 2026-01-01 
00:55:22.622930 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.622935 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.622940 | orchestrator |
2026-01-01 00:55:22.622946 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-01-01 00:55:22.622951 | orchestrator | Thursday 01 January 2026 00:52:14 +0000 (0:00:02.067) 0:03:56.740 ******
2026-01-01 00:55:22.622957 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.622966 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.622971 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.622977 | orchestrator |
2026-01-01 00:55:22.622982 | orchestrator | TASK [include_role : masakari] *************************************************
2026-01-01 00:55:22.622987 | orchestrator | Thursday 01 January 2026 00:52:16 +0000 (0:00:00.326) 0:03:58.370 ******
2026-01-01 00:55:22.622995 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.623001 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.623006 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.623012 | orchestrator |
2026-01-01 00:55:22.623017 | orchestrator | TASK [include_role : memcached] ************************************************
2026-01-01 00:55:22.623022 | orchestrator | Thursday 01 January 2026 00:52:16 +0000 (0:00:00.326) 0:03:58.696 ******
2026-01-01 00:55:22.623028 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:55:22.623033 | orchestrator |
2026-01-01 00:55:22.623038 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-01-01 00:55:22.623044 | orchestrator | Thursday 01 January 2026 00:52:17 +0000 (0:00:01.399) 0:04:00.095 ******
2026-01-01 00:55:22.623052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image':
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-01 00:55:22.623059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-01 00:55:22.623065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 
'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-01 00:55:22.623070 | orchestrator | 2026-01-01 00:55:22.623076 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-01 00:55:22.623081 | orchestrator | Thursday 01 January 2026 00:52:19 +0000 (0:00:01.604) 0:04:01.700 ****** 2026-01-01 00:55:22.623087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-01 00:55:22.623096 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.623106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-01 00:55:22.623111 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.623120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-01 00:55:22.623126 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.623131 | orchestrator | 2026-01-01 00:55:22.623136 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-01 00:55:22.623142 | orchestrator | Thursday 01 January 2026 00:52:19 +0000 (0:00:00.409) 0:04:02.109 ****** 2026-01-01 00:55:22.623147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-01 00:55:22.623153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-01 
00:55:22.623159 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.623164 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.623170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-01 00:55:22.623175 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.623181 | orchestrator | 2026-01-01 00:55:22.623186 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-01 00:55:22.623191 | orchestrator | Thursday 01 January 2026 00:52:20 +0000 (0:00:00.934) 0:04:03.044 ****** 2026-01-01 00:55:22.623196 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.623202 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.623207 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.623216 | orchestrator | 2026-01-01 00:55:22.623222 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-01 00:55:22.623227 | orchestrator | Thursday 01 January 2026 00:52:21 +0000 (0:00:00.534) 0:04:03.579 ****** 2026-01-01 00:55:22.623232 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.623237 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.623243 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.623248 | orchestrator | 2026-01-01 00:55:22.623253 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-01 00:55:22.623258 | orchestrator | Thursday 01 January 2026 00:52:22 +0000 (0:00:01.382) 0:04:04.962 ****** 2026-01-01 00:55:22.623264 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.623269 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.623275 | orchestrator | 
skipping: [testbed-node-2] 2026-01-01 00:55:22.623280 | orchestrator | 2026-01-01 00:55:22.623285 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-01 00:55:22.623291 | orchestrator | Thursday 01 January 2026 00:52:23 +0000 (0:00:00.350) 0:04:05.312 ****** 2026-01-01 00:55:22.623296 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.623301 | orchestrator | 2026-01-01 00:55:22.623307 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-01 00:55:22.623312 | orchestrator | Thursday 01 January 2026 00:52:24 +0000 (0:00:01.630) 0:04:06.943 ****** 2026-01-01 00:55:22.623320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-01 00:55:22.623330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-01 
00:55:22.623352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-01 00:55:22.623358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-01 00:55:22.623376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.623382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-01 00:55:22.623394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.623400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:55:22.623429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': 
False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-01 00:55:22.623571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-01 00:55:22.623581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.623593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-01 00:55:22.623686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.623704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 
'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-01 00:55:22.623715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}}})  2026-01-01 00:55:22.623727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:55:22.623772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.623790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:55:22.623801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.623807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-01 00:55:22.623858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 
00:55:22.623868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.623879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-01 00:55:22.623897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-01 00:55:22.623952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:55:22.623973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.623989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.623998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-01 00:55:22.624007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-01 00:55:22.624016 | orchestrator |
2026-01-01 00:55:22.624025 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-01-01 00:55:22.624032 | orchestrator | Thursday 01 January 2026 00:52:29 +0000 (0:00:04.357) 0:04:11.300 ******
2026-01-01 00:55:22.624096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-01 00:55:22.624113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-01 00:55:22.624157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-01-01 00:55:22.624239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:55:22.624288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-01 00:55:22.624350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-01 00:55:22.624371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:55:22.624441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-01 00:55:22.624495 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.624501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:55:22.624520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-01 00:55:22.624577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-01 00:55:22.624624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:55:22.624635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624656 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.624665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:55:22.624695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-01 00:55:22.624718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.624723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-01 00:55:22.624728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-01 00:55:22.624737 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.624742 | orchestrator | 2026-01-01 00:55:22.624747 | orchestrator | TASK 
[haproxy-config : Configuring firewall for neutron] *********************** 2026-01-01 00:55:22.624752 | orchestrator | Thursday 01 January 2026 00:52:30 +0000 (0:00:01.340) 0:04:12.641 ****** 2026-01-01 00:55:22.624757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-01 00:55:22.624763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-01 00:55:22.624768 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.624786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-01 00:55:22.624792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-01 00:55:22.624797 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.624802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-01 00:55:22.624807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-01 00:55:22.624812 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.624817 | orchestrator | 2026-01-01 00:55:22.624822 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-01 
00:55:22.624826 | orchestrator | Thursday 01 January 2026 00:52:32 +0000 (0:00:01.918) 0:04:14.559 ****** 2026-01-01 00:55:22.624834 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.624839 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.624843 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.624848 | orchestrator | 2026-01-01 00:55:22.624853 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-01 00:55:22.624858 | orchestrator | Thursday 01 January 2026 00:52:33 +0000 (0:00:01.306) 0:04:15.865 ****** 2026-01-01 00:55:22.624863 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.624867 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.624872 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.624877 | orchestrator | 2026-01-01 00:55:22.624882 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-01 00:55:22.624886 | orchestrator | Thursday 01 January 2026 00:52:35 +0000 (0:00:02.280) 0:04:18.145 ****** 2026-01-01 00:55:22.624893 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.624901 | orchestrator | 2026-01-01 00:55:22.624910 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-01 00:55:22.624915 | orchestrator | Thursday 01 January 2026 00:52:37 +0000 (0:00:01.265) 0:04:19.411 ****** 2026-01-01 00:55:22.624924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.624934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.624952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.624958 | orchestrator | 2026-01-01 00:55:22.624963 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-01 00:55:22.624968 | orchestrator | Thursday 01 January 2026 00:52:41 +0000 (0:00:04.276) 0:04:23.687 ****** 2026-01-01 00:55:22.624976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.624981 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.624986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.624995 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.625000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.625004 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.625009 | orchestrator | 2026-01-01 00:55:22.625014 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-01 00:55:22.625019 | orchestrator | Thursday 01 January 2026 00:52:41 +0000 (0:00:00.556) 0:04:24.244 ****** 2026-01-01 00:55:22.625024 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625034 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.625051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625062 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.625067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625077 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.625082 | orchestrator | 2026-01-01 00:55:22.625087 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-01 00:55:22.625096 | orchestrator | Thursday 01 January 2026 00:52:42 +0000 (0:00:00.799) 0:04:25.043 ****** 2026-01-01 
00:55:22.625101 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.625105 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.625110 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.625115 | orchestrator | 2026-01-01 00:55:22.625120 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-01 00:55:22.625124 | orchestrator | Thursday 01 January 2026 00:52:44 +0000 (0:00:01.352) 0:04:26.395 ****** 2026-01-01 00:55:22.625129 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.625138 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.625143 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.625148 | orchestrator | 2026-01-01 00:55:22.625153 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-01 00:55:22.625160 | orchestrator | Thursday 01 January 2026 00:52:46 +0000 (0:00:02.465) 0:04:28.861 ****** 2026-01-01 00:55:22.625168 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.625173 | orchestrator | 2026-01-01 00:55:22.625180 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-01 00:55:22.625187 | orchestrator | Thursday 01 January 2026 00:52:48 +0000 (0:00:01.562) 0:04:30.424 ****** 2026-01-01 00:55:22.625193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.625200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.625236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.625263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625299 | orchestrator | 2026-01-01 00:55:22.625305 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-01 00:55:22.625313 | orchestrator | Thursday 01 January 2026 00:52:53 +0000 (0:00:05.637) 0:04:36.061 ****** 2026-01-01 00:55:22.625319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.625324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625335 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.625353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.625365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625376 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.625381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.625387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.625411 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.625416 | orchestrator | 2026-01-01 00:55:22.625421 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-01 00:55:22.625430 | orchestrator | Thursday 01 January 2026 00:52:54 +0000 (0:00:01.071) 0:04:37.133 ****** 2026-01-01 00:55:22.625435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625460 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.625465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625484 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.625489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-01 00:55:22.625509 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.625517 | orchestrator | 2026-01-01 00:55:22.625523 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-01 00:55:22.625528 | orchestrator | Thursday 01 January 2026 00:52:56 +0000 (0:00:01.348) 0:04:38.481 ****** 2026-01-01 00:55:22.625533 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.625538 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.625543 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.625548 | orchestrator | 2026-01-01 00:55:22.625552 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-01 00:55:22.625557 | orchestrator | Thursday 01 January 2026 00:52:57 +0000 (0:00:01.483) 0:04:39.965 ****** 2026-01-01 00:55:22.625562 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.625572 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.625576 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.625581 | orchestrator | 2026-01-01 00:55:22.625586 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-01 00:55:22.625591 | orchestrator | Thursday 01 January 2026 00:52:59 +0000 (0:00:02.032) 0:04:41.997 ****** 2026-01-01 00:55:22.625596 | orchestrator | included: nova-cell for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-01 00:55:22.625600 | orchestrator | 2026-01-01 00:55:22.625605 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-01 00:55:22.625622 | orchestrator | Thursday 01 January 2026 00:53:01 +0000 (0:00:01.448) 0:04:43.445 ****** 2026-01-01 00:55:22.625628 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-01 00:55:22.625633 | orchestrator | 2026-01-01 00:55:22.625638 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-01 00:55:22.625659 | orchestrator | Thursday 01 January 2026 00:53:01 +0000 (0:00:00.813) 0:04:44.259 ****** 2026-01-01 00:55:22.625665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-01 00:55:22.625673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-01 00:55:22.625678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-01 00:55:22.625684 | orchestrator | 2026-01-01 00:55:22.625688 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-01 00:55:22.625694 | orchestrator | Thursday 01 January 2026 00:53:05 +0000 (0:00:03.881) 0:04:48.141 ****** 2026-01-01 00:55:22.625698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:55:22.625703 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.625708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:55:22.625717 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.625723 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:55:22.625727 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.625732 | orchestrator | 2026-01-01 00:55:22.625737 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-01 00:55:22.625742 | orchestrator | Thursday 01 January 2026 00:53:07 +0000 (0:00:01.580) 0:04:49.721 ****** 2026-01-01 00:55:22.625760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:55:22.625765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:55:22.625771 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.625776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:55:22.625782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': 
'6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:55:22.625787 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.625794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:55:22.625799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-01 00:55:22.625804 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.625809 | orchestrator | 2026-01-01 00:55:22.625814 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-01 00:55:22.625819 | orchestrator | Thursday 01 January 2026 00:53:09 +0000 (0:00:01.612) 0:04:51.334 ****** 2026-01-01 00:55:22.625823 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.625828 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.625833 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.625838 | orchestrator | 2026-01-01 00:55:22.625843 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-01 00:55:22.625847 | orchestrator | Thursday 01 January 2026 00:53:11 +0000 (0:00:02.853) 0:04:54.187 ****** 2026-01-01 00:55:22.625852 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.625857 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.625862 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.625867 | orchestrator | 2026-01-01 00:55:22.625871 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-01 00:55:22.625876 | orchestrator | Thursday 01 January 2026 00:53:15 
+0000 (0:00:03.286) 0:04:57.473 ****** 2026-01-01 00:55:22.625881 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-01 00:55:22.625889 | orchestrator | 2026-01-01 00:55:22.625894 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-01 00:55:22.625900 | orchestrator | Thursday 01 January 2026 00:53:16 +0000 (0:00:01.219) 0:04:58.692 ****** 2026-01-01 00:55:22.625905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:55:22.625909 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.625914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:55:22.625919 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.625936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:55:22.625942 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.625947 | orchestrator | 2026-01-01 00:55:22.625952 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-01 00:55:22.625957 | orchestrator | Thursday 01 January 2026 00:53:17 +0000 (0:00:01.127) 0:04:59.820 ****** 2026-01-01 00:55:22.625962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:55:22.625967 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.625988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 
00:55:22.625994 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.625998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-01 00:55:22.626008 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.626035 | orchestrator | 2026-01-01 00:55:22.626044 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-01 00:55:22.626053 | orchestrator | Thursday 01 January 2026 00:53:18 +0000 (0:00:01.126) 0:05:00.946 ****** 2026-01-01 00:55:22.626061 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.626068 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.626077 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.626084 | orchestrator | 2026-01-01 00:55:22.626089 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-01 00:55:22.626094 | orchestrator | Thursday 01 January 2026 00:53:20 +0000 (0:00:01.541) 0:05:02.488 ****** 2026-01-01 00:55:22.626099 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:55:22.626104 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:55:22.626109 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:55:22.626113 | orchestrator | 2026-01-01 00:55:22.626118 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-01 00:55:22.626123 | orchestrator | Thursday 01 January 2026 00:53:22 +0000 (0:00:02.473) 0:05:04.961 ****** 2026-01-01 00:55:22.626128 | 
orchestrator | ok: [testbed-node-1] 2026-01-01 00:55:22.626132 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:55:22.626137 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:55:22.626142 | orchestrator | 2026-01-01 00:55:22.626147 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-01 00:55:22.626152 | orchestrator | Thursday 01 January 2026 00:53:25 +0000 (0:00:03.313) 0:05:08.274 ****** 2026-01-01 00:55:22.626157 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-01 00:55:22.626161 | orchestrator | 2026-01-01 00:55:22.626166 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-01 00:55:22.626171 | orchestrator | Thursday 01 January 2026 00:53:26 +0000 (0:00:00.867) 0:05:09.142 ****** 2026-01-01 00:55:22.626176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:55:22.626181 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.626204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:55:22.626209 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.626214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:55:22.626224 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.626229 | orchestrator | 2026-01-01 00:55:22.626234 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-01 00:55:22.626242 | orchestrator | Thursday 01 January 2026 00:53:28 +0000 (0:00:01.552) 0:05:10.695 ****** 2026-01-01 00:55:22.626247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:55:22.626252 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.626257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:55:22.626262 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.626267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-01 00:55:22.626272 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.626277 | orchestrator | 2026-01-01 00:55:22.626282 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-01 00:55:22.626287 | orchestrator | Thursday 01 January 2026 00:53:29 +0000 (0:00:01.467) 0:05:12.163 ****** 2026-01-01 00:55:22.626291 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.626296 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.626301 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.626306 | orchestrator | 2026-01-01 00:55:22.626310 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-01 00:55:22.626315 | orchestrator | Thursday 01 January 2026 00:53:31 +0000 (0:00:01.823) 0:05:13.986 ****** 2026-01-01 00:55:22.626320 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:55:22.626325 | orchestrator | ok: 
[testbed-node-1] 2026-01-01 00:55:22.626330 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:55:22.626334 | orchestrator | 2026-01-01 00:55:22.626339 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-01 00:55:22.626344 | orchestrator | Thursday 01 January 2026 00:53:34 +0000 (0:00:02.570) 0:05:16.557 ****** 2026-01-01 00:55:22.626349 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:55:22.626354 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:55:22.626358 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:55:22.626363 | orchestrator | 2026-01-01 00:55:22.626368 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-01 00:55:22.626373 | orchestrator | Thursday 01 January 2026 00:53:37 +0000 (0:00:03.536) 0:05:20.094 ****** 2026-01-01 00:55:22.626377 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.626382 | orchestrator | 2026-01-01 00:55:22.626387 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-01 00:55:22.626395 | orchestrator | Thursday 01 January 2026 00:53:39 +0000 (0:00:01.647) 0:05:21.741 ****** 2026-01-01 00:55:22.626413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.626432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:55:22.626438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.626454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.626475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:55:22.626484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-01 00:55:22.626495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.626505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:55:22.626524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.626543 | orchestrator | 2026-01-01 00:55:22.626548 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-01 00:55:22.626553 | orchestrator | Thursday 01 January 2026 00:53:43 +0000 (0:00:03.731) 0:05:25.473 ****** 2026-01-01 00:55:22.626558 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.626563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:55:22.626568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.626599 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.626606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.626612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:55:22.626617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.626692 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.626699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.626708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-01 00:55:22.626713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-01 00:55:22.626723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-01 00:55:22.626732 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.626737 | orchestrator | 2026-01-01 00:55:22.626742 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-01 00:55:22.626747 | orchestrator | Thursday 01 January 2026 00:53:43 +0000 (0:00:00.795) 0:05:26.268 ****** 2026-01-01 00:55:22.626752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-01 00:55:22.626757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-01 00:55:22.626762 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.626778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-01 00:55:22.626784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-01 00:55:22.626788 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.626793 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-01 00:55:22.626798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-01 00:55:22.626803 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.626808 | orchestrator | 2026-01-01 00:55:22.626813 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-01 00:55:22.626818 | orchestrator | Thursday 01 January 2026 00:53:45 +0000 (0:00:01.634) 0:05:27.903 ****** 2026-01-01 00:55:22.626827 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.626832 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.626837 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.626842 | orchestrator | 2026-01-01 00:55:22.626859 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-01 00:55:22.626868 | orchestrator | Thursday 01 January 2026 00:53:47 +0000 (0:00:01.598) 0:05:29.501 ****** 2026-01-01 00:55:22.626877 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:55:22.626892 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:55:22.626900 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:55:22.626907 | orchestrator | 2026-01-01 00:55:22.626915 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-01 00:55:22.626922 | orchestrator | Thursday 01 January 2026 00:53:49 +0000 (0:00:02.321) 0:05:31.823 ****** 2026-01-01 00:55:22.626930 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.626938 | orchestrator | 2026-01-01 00:55:22.626945 | orchestrator | TASK 
[haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-01 00:55:22.626954 | orchestrator | Thursday 01 January 2026 00:53:50 +0000 (0:00:01.432) 0:05:33.256 ****** 2026-01-01 00:55:22.626962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:55:22.626978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:55:22.627006 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:55:22.627022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:55:22.627029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:55:22.627039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:55:22.627045 | orchestrator | 2026-01-01 00:55:22.627050 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-01 00:55:22.627055 | orchestrator | Thursday 01 January 2026 00:53:56 +0000 (0:00:05.935) 0:05:39.192 ****** 2026-01-01 00:55:22.627073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-01 00:55:22.627082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-01 00:55:22.627087 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.627092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-01 00:55:22.627102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-01 00:55:22.627107 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.627124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-01 00:55:22.627133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-01 00:55:22.627138 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.627143 | orchestrator | 2026-01-01 00:55:22.627148 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-01 00:55:22.627156 | orchestrator | Thursday 01 January 2026 00:53:57 +0000 (0:00:00.716) 0:05:39.908 ****** 2026-01-01 00:55:22.627164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-01 00:55:22.627170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-01 00:55:22.627175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-01 00:55:22.627181 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.627185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-01 00:55:22.627190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-01 00:55:22.627195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-01 00:55:22.627203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-01 00:55:22.627208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-01 00:55:22.627213 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.627218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-01 00:55:22.627222 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.627227 | orchestrator | 2026-01-01 00:55:22.627231 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-01 00:55:22.627236 | orchestrator | Thursday 01 January 2026 00:53:58 +0000 (0:00:01.260) 0:05:41.169 ****** 2026-01-01 00:55:22.627240 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.627245 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.627249 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.627254 | orchestrator | 2026-01-01 00:55:22.627258 | orchestrator | TASK [proxysql-config : Copying over 
opensearch ProxySQL rules config] ********* 2026-01-01 00:55:22.627263 | orchestrator | Thursday 01 January 2026 00:53:59 +0000 (0:00:00.915) 0:05:42.085 ****** 2026-01-01 00:55:22.627268 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.627272 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.627277 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.627281 | orchestrator | 2026-01-01 00:55:22.627296 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-01 00:55:22.627302 | orchestrator | Thursday 01 January 2026 00:54:01 +0000 (0:00:01.290) 0:05:43.375 ****** 2026-01-01 00:55:22.627306 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:55:22.627311 | orchestrator | 2026-01-01 00:55:22.627315 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-01 00:55:22.627320 | orchestrator | Thursday 01 January 2026 00:54:02 +0000 (0:00:01.584) 0:05:44.959 ****** 2026-01-01 00:55:22.627331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-01 00:55:22.627336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 00:55:22.627342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-01 00:55:22.627351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 00:55:22.627373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 00:55:22.627381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627389 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-01 00:55:22.627399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 00:55:22.627404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 00:55:22.627408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 00:55:22.627432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-01 00:55:22.627437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}}}})  2026-01-01 00:55:22.627443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 00:55:22.627464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-01 00:55:22.627488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-01 00:55:22.627493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 00:55:22.627511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-01 00:55:22.627520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-01 00:55:22.627525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 00:55:22.627540 | orchestrator | 2026-01-01 00:55:22.627545 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-01 00:55:22.627550 | orchestrator | Thursday 01 January 2026 00:54:07 +0000 (0:00:04.521) 0:05:49.481 ****** 2026-01-01 00:55:22.627593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-01 00:55:22.627609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 00:55:22.627618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 00:55:22.627630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 00:55:22.627635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-01 00:55:22.627657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-01 00:55:22.627665 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:55:22.627679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:55:22.627684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-01 00:55:22.627689 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.627697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-01 00:55:22.627702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-01 00:55:22.627707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:55:22.627712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:55:22.627717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-01 00:55:22.627732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-01 00:55:22.627741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-01 00:55:22.627746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:55:22.627750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:55:22.627755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-01 00:55:22.627760 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.627764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-01 00:55:22.627776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-01 00:55:22.627781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:55:22.627786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:55:22.627793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-01 00:55:22.627798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-01 00:55:22.627803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-01 00:55:22.627812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:55:22.627832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 00:55:22.627837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-01 00:55:22.627842 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.627847 | orchestrator |
2026-01-01 00:55:22.627851 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-01-01 00:55:22.627856 | orchestrator | Thursday 01 January 2026 00:54:08 +0000 (0:00:01.339) 0:05:50.821 ******
2026-01-01 00:55:22.627864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-01 00:55:22.627869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-01 00:55:22.627875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-01 00:55:22.627880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-01 00:55:22.627886 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.627891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-01 00:55:22.627895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-01 00:55:22.627904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-01 00:55:22.627909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-01 00:55:22.627914 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.627918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-01 00:55:22.627923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-01 00:55:22.627928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-01 00:55:22.627935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-01 00:55:22.627940 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.627944 | orchestrator |
2026-01-01 00:55:22.627949 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-01-01 00:55:22.627954 | orchestrator | Thursday 01 January 2026 00:54:09 +0000 (0:00:01.186) 0:05:52.008 ******
2026-01-01 00:55:22.627959 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.627963 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.627968 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.627972 | orchestrator |
2026-01-01 00:55:22.627977 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-01-01 00:55:22.627982 | orchestrator | Thursday 01 January 2026 00:54:10 +0000 (0:00:00.456) 0:05:52.464 ******
2026-01-01 00:55:22.627986 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.627991 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.627995 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628000 | orchestrator |
2026-01-01 00:55:22.628004 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-01-01 00:55:22.628009 | orchestrator | Thursday 01 January 2026 00:54:11 +0000 (0:00:01.543) 0:05:54.008 ******
2026-01-01 00:55:22.628013 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:55:22.628018 | orchestrator |
2026-01-01 00:55:22.628022 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-01-01 00:55:22.628042 | orchestrator | Thursday 01 January 2026 00:54:13 +0000 (0:00:01.821) 0:05:55.829 ******
2026-01-01 00:55:22.628047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:55:22.628057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:55:22.628062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:55:22.628067 | orchestrator |
2026-01-01 00:55:22.628075 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-01-01 00:55:22.628079 | orchestrator | Thursday 01 January 2026 00:54:15 +0000 (0:00:02.346) 0:05:58.176 ******
2026-01-01 00:55:22.628088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:55:22.628093 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:55:22.628107 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.628112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-01 00:55:22.628117 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628122 | orchestrator |
2026-01-01 00:55:22.628127 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-01-01 00:55:22.628131 | orchestrator | Thursday 01 January 2026 00:54:16 +0000 (0:00:00.488) 0:05:58.664 ******
2026-01-01 00:55:22.628136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-01 00:55:22.628141 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-01 00:55:22.628150 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.628154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-01 00:55:22.628159 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628164 | orchestrator |
2026-01-01 00:55:22.628168 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-01-01 00:55:22.628173 | orchestrator | Thursday 01 January 2026 00:54:17 +0000 (0:00:00.914) 0:05:59.579 ******
2026-01-01 00:55:22.628180 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628184 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.628189 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628193 | orchestrator |
2026-01-01 00:55:22.628198 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-01-01 00:55:22.628203 | orchestrator | Thursday 01 January 2026 00:54:17 +0000 (0:00:00.428) 0:06:00.008 ******
2026-01-01 00:55:22.628207 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628212 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.628216 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628221 | orchestrator |
2026-01-01 00:55:22.628225 | orchestrator | TASK [include_role : skyline] **************************************************
2026-01-01 00:55:22.628233 | orchestrator | Thursday 01 January 2026 00:54:18 +0000 (0:00:01.645) 0:06:01.231 ******
2026-01-01 00:55:22.628238 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:55:22.628242 | orchestrator |
2026-01-01 00:55:22.628247 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-01-01 00:55:22.628251 | orchestrator | Thursday 01 January 2026 00:54:20 +0000 (0:00:01.645) 0:06:02.877 ******
2026-01-01 00:55:22.628259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-01 00:55:22.628265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-01 00:55:22.628270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-01 00:55:22.628277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-01 00:55:22.628288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-01 00:55:22.628293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-01 00:55:22.628298 | orchestrator |
2026-01-01 00:55:22.628302 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-01-01 00:55:22.628307 | orchestrator | Thursday 01 January 2026 00:54:26 +0000 (0:00:05.714) 0:06:08.592 ******
2026-01-01 00:55:22.628312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-01 00:55:22.628319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-01 00:55:22.628327 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-01 00:55:22.628340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-01 00:55:22.628345 | orchestrator
| skipping: [testbed-node-1] 2026-01-01 00:55:22.628349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.628354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-01 00:55:22.628362 | orchestrator | skipping: 
[testbed-node-2] 2026-01-01 00:55:22.628367 | orchestrator | 2026-01-01 00:55:22.628371 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-01 00:55:22.628378 | orchestrator | Thursday 01 January 2026 00:54:26 +0000 (0:00:00.666) 0:06:09.258 ****** 2026-01-01 00:55:22.628383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628402 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:55:22.628409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628419 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628428 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:55:22.628433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-01 00:55:22.628451 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:55:22.628456 | orchestrator | 2026-01-01 00:55:22.628460 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-01 00:55:22.628465 | orchestrator | Thursday 01 January 2026 00:54:28 +0000 (0:00:01.862) 0:06:11.120 ****** 2026-01-01 00:55:22.628469 | orchestrator | changed: [testbed-node-0] 
2026-01-01 00:55:22.628474 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.628478 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.628483 | orchestrator | 
2026-01-01 00:55:22.628487 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-01-01 00:55:22.628495 | orchestrator | Thursday 01 January 2026 00:54:30 +0000 (0:00:01.458) 0:06:12.579 ******
2026-01-01 00:55:22.628500 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.628505 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.628509 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.628514 | orchestrator | 
2026-01-01 00:55:22.628518 | orchestrator | TASK [include_role : swift] ****************************************************
2026-01-01 00:55:22.628523 | orchestrator | Thursday 01 January 2026 00:54:32 +0000 (0:00:02.287) 0:06:14.866 ******
2026-01-01 00:55:22.628527 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628532 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.628536 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628541 | orchestrator | 
2026-01-01 00:55:22.628545 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-01-01 00:55:22.628550 | orchestrator | Thursday 01 January 2026 00:54:32 +0000 (0:00:00.352) 0:06:15.219 ******
2026-01-01 00:55:22.628555 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628559 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.628563 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628568 | orchestrator | 
2026-01-01 00:55:22.628573 | orchestrator | TASK [include_role : trove] ****************************************************
2026-01-01 00:55:22.628579 | orchestrator | Thursday 01 January 2026 00:54:33 +0000 (0:00:00.340) 0:06:15.560 ******
2026-01-01 00:55:22.628584 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628588 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.628593 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628598 | orchestrator | 
2026-01-01 00:55:22.628602 | orchestrator | TASK [include_role : venus] ****************************************************
2026-01-01 00:55:22.628607 | orchestrator | Thursday 01 January 2026 00:54:33 +0000 (0:00:00.683) 0:06:16.243 ******
2026-01-01 00:55:22.628611 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628616 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.628620 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628625 | orchestrator | 
2026-01-01 00:55:22.628629 | orchestrator | TASK [include_role : watcher] **************************************************
2026-01-01 00:55:22.628634 | orchestrator | Thursday 01 January 2026 00:54:34 +0000 (0:00:00.376) 0:06:16.619 ******
2026-01-01 00:55:22.628639 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628660 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.628665 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628669 | orchestrator | 
2026-01-01 00:55:22.628674 | orchestrator | TASK [include_role : zun] ******************************************************
2026-01-01 00:55:22.628678 | orchestrator | Thursday 01 January 2026 00:54:34 +0000 (0:00:00.358) 0:06:16.978 ******
2026-01-01 00:55:22.628683 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628687 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.628692 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628696 | orchestrator | 
2026-01-01 00:55:22.628704 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-01-01 00:55:22.628708 | orchestrator | Thursday 01 January 2026 00:54:35 +0000 (0:00:00.908) 0:06:17.886 ******
2026-01-01 00:55:22.628713 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.628717 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.628722 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.628726 | orchestrator | 
2026-01-01 00:55:22.628731 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-01-01 00:55:22.628735 | orchestrator | Thursday 01 January 2026 00:54:36 +0000 (0:00:00.746) 0:06:18.633 ******
2026-01-01 00:55:22.628740 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.628744 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.628749 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.628753 | orchestrator | 
2026-01-01 00:55:22.628758 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-01-01 00:55:22.628766 | orchestrator | Thursday 01 January 2026 00:54:36 +0000 (0:00:00.355) 0:06:18.988 ******
2026-01-01 00:55:22.628770 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.628775 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.628779 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.628784 | orchestrator | 
2026-01-01 00:55:22.628788 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-01-01 00:55:22.628793 | orchestrator | Thursday 01 January 2026 00:54:37 +0000 (0:00:01.022) 0:06:20.010 ******
2026-01-01 00:55:22.628797 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.628802 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.628806 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.628811 | orchestrator | 
2026-01-01 00:55:22.628815 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-01-01 00:55:22.628820 | orchestrator | Thursday 01 January 2026 00:54:39 +0000 (0:00:01.382) 0:06:21.393 ******
2026-01-01 00:55:22.628824 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.628829 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.628833 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.628838 | orchestrator | 
2026-01-01 00:55:22.628842 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-01-01 00:55:22.628847 | orchestrator | Thursday 01 January 2026 00:54:40 +0000 (0:00:01.070) 0:06:22.463 ******
2026-01-01 00:55:22.628851 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.628856 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.628860 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.628865 | orchestrator | 
2026-01-01 00:55:22.628869 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-01-01 00:55:22.628874 | orchestrator | Thursday 01 January 2026 00:54:50 +0000 (0:00:09.920) 0:06:32.384 ******
2026-01-01 00:55:22.628878 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.628883 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.628887 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.628892 | orchestrator | 
2026-01-01 00:55:22.628896 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-01-01 00:55:22.628901 | orchestrator | Thursday 01 January 2026 00:54:50 +0000 (0:00:00.831) 0:06:33.216 ******
2026-01-01 00:55:22.628905 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.628910 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.628914 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.628919 | orchestrator | 
2026-01-01 00:55:22.628923 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-01-01 00:55:22.628928 | orchestrator | Thursday 01 January 2026 00:55:05 +0000 (0:00:14.473) 0:06:47.689 ******
2026-01-01 00:55:22.628932 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.628937 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.628941 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.628945 | orchestrator | 
2026-01-01 00:55:22.628950 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-01-01 00:55:22.628954 | orchestrator | Thursday 01 January 2026 00:55:06 +0000 (0:00:01.188) 0:06:48.878 ******
2026-01-01 00:55:22.628959 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:55:22.628963 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:55:22.628968 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:55:22.628972 | orchestrator | 
2026-01-01 00:55:22.628977 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-01-01 00:55:22.628981 | orchestrator | Thursday 01 January 2026 00:55:11 +0000 (0:00:04.796) 0:06:53.675 ******
2026-01-01 00:55:22.628986 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.628990 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.628995 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.628999 | orchestrator | 
2026-01-01 00:55:22.629004 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-01-01 00:55:22.629008 | orchestrator | Thursday 01 January 2026 00:55:11 +0000 (0:00:00.388) 0:06:54.064 ******
2026-01-01 00:55:22.629013 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.629023 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.629027 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.629032 | orchestrator | 
2026-01-01 00:55:22.629037 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-01-01 00:55:22.629041 | orchestrator | Thursday 01 January 2026 00:55:12 +0000 (0:00:00.390) 0:06:54.455 ******
2026-01-01 00:55:22.629046 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.629050 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.629055 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.629059 | orchestrator | 
2026-01-01 00:55:22.629064 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-01-01 00:55:22.629069 | orchestrator | Thursday 01 January 2026 00:55:12 +0000 (0:00:00.740) 0:06:55.195 ******
2026-01-01 00:55:22.629073 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.629078 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.629082 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.629087 | orchestrator | 
2026-01-01 00:55:22.629091 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-01-01 00:55:22.629096 | orchestrator | Thursday 01 January 2026 00:55:13 +0000 (0:00:00.387) 0:06:55.583 ******
2026-01-01 00:55:22.629101 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.629105 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.629110 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.629114 | orchestrator | 
2026-01-01 00:55:22.629119 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-01-01 00:55:22.629126 | orchestrator | Thursday 01 January 2026 00:55:13 +0000 (0:00:00.385) 0:06:55.968 ******
2026-01-01 00:55:22.629130 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:55:22.629135 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:55:22.629139 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:55:22.629144 | orchestrator | 
2026-01-01 00:55:22.629149 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-01 00:55:22.629153 | orchestrator | Thursday 01 January 2026 00:55:14 +0000 (0:00:00.366) 0:06:56.335 ******
2026-01-01 00:55:22.629158 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.629162 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.629167 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.629171 | orchestrator | 
2026-01-01 00:55:22.629176 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-01-01 00:55:22.629181 | orchestrator | Thursday 01 January 2026 00:55:19 +0000 (0:00:05.239) 0:07:01.575 ******
2026-01-01 00:55:22.629185 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:55:22.629190 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:55:22.629194 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:55:22.629199 | orchestrator | 
2026-01-01 00:55:22.629203 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:55:22.629208 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-01 00:55:22.629213 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-01 00:55:22.629218 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-01-01 00:55:22.629222 | orchestrator | 
2026-01-01 00:55:22.629227 | orchestrator | 
2026-01-01 00:55:22.629231 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:55:22.629236 | orchestrator | Thursday 01 January 2026 00:55:20 +0000 (0:00:00.929) 0:07:02.505 ******
2026-01-01 00:55:22.629240 | orchestrator | ===============================================================================
2026-01-01 00:55:22.629245 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.47s
2026-01-01 00:55:22.629249 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.92s
2026-01-01 00:55:22.629267 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 7.38s
2026-01-01 00:55:22.629273 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.94s
2026-01-01 00:55:22.629277 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.71s
2026-01-01 00:55:22.629282 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.64s
2026-01-01 00:55:22.629287 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.42s
2026-01-01 00:55:22.629291 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.24s
2026-01-01 00:55:22.629296 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.01s
2026-01-01 00:55:22.629300 | orchestrator | loadbalancer : Remove mariadb.cfg if proxysql enabled ------------------- 4.96s
2026-01-01 00:55:22.629305 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.80s
2026-01-01 00:55:22.629309 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.80s
2026-01-01 00:55:22.629314 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.69s
2026-01-01 00:55:22.629318 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.59s
2026-01-01 00:55:22.629323 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.52s
2026-01-01 00:55:22.629327 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.50s
2026-01-01 00:55:22.629332 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.49s
2026-01-01 00:55:22.629336 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 4.42s
2026-01-01 00:55:22.629341 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.40s
2026-01-01 00:55:22.629346 | orchestrator | haproxy-config : Copying over neutron haproxy config
-------------------- 4.36s 2026-01-01 00:55:25.642862 | orchestrator | 2026-01-01 00:55:25 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:25.644014 | orchestrator | 2026-01-01 00:55:25 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:25.644889 | orchestrator | 2026-01-01 00:55:25 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:55:25.645277 | orchestrator | 2026-01-01 00:55:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:28.680067 | orchestrator | 2026-01-01 00:55:28 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:28.684068 | orchestrator | 2026-01-01 00:55:28 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:28.686738 | orchestrator | 2026-01-01 00:55:28 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:55:28.687913 | orchestrator | 2026-01-01 00:55:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:31.721501 | orchestrator | 2026-01-01 00:55:31 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:31.721947 | orchestrator | 2026-01-01 00:55:31 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:31.725842 | orchestrator | 2026-01-01 00:55:31 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:55:31.725861 | orchestrator | 2026-01-01 00:55:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:34.761290 | orchestrator | 2026-01-01 00:55:34 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:34.762699 | orchestrator | 2026-01-01 00:55:34 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:34.764417 | orchestrator | 2026-01-01 00:55:34 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 
2026-01-01 00:55:34.764490 | orchestrator | 2026-01-01 00:55:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:37.795984 | orchestrator | 2026-01-01 00:55:37 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:37.796202 | orchestrator | 2026-01-01 00:55:37 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:37.797202 | orchestrator | 2026-01-01 00:55:37 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:55:37.797316 | orchestrator | 2026-01-01 00:55:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:40.837160 | orchestrator | 2026-01-01 00:55:40 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:40.839565 | orchestrator | 2026-01-01 00:55:40 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:40.839597 | orchestrator | 2026-01-01 00:55:40 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:55:40.839607 | orchestrator | 2026-01-01 00:55:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:43.873854 | orchestrator | 2026-01-01 00:55:43 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:43.875761 | orchestrator | 2026-01-01 00:55:43 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:43.879116 | orchestrator | 2026-01-01 00:55:43 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:55:43.879137 | orchestrator | 2026-01-01 00:55:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:46.927945 | orchestrator | 2026-01-01 00:55:46 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:46.933904 | orchestrator | 2026-01-01 00:55:46 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:46.935946 | orchestrator | 2026-01-01 
00:55:46 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:55:46.935989 | orchestrator | 2026-01-01 00:55:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:49.969447 | orchestrator | 2026-01-01 00:55:49 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:49.970330 | orchestrator | 2026-01-01 00:55:49 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:49.970859 | orchestrator | 2026-01-01 00:55:49 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:55:49.970893 | orchestrator | 2026-01-01 00:55:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:53.027491 | orchestrator | 2026-01-01 00:55:53 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:53.030212 | orchestrator | 2026-01-01 00:55:53 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:53.033730 | orchestrator | 2026-01-01 00:55:53 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:55:53.033774 | orchestrator | 2026-01-01 00:55:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:56.094882 | orchestrator | 2026-01-01 00:55:56 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:56.096115 | orchestrator | 2026-01-01 00:55:56 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:56.097570 | orchestrator | 2026-01-01 00:55:56 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:55:56.097692 | orchestrator | 2026-01-01 00:55:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:55:59.140063 | orchestrator | 2026-01-01 00:55:59 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:55:59.141734 | orchestrator | 2026-01-01 00:55:59 | INFO  | Task 
aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:55:59.148057 | orchestrator | 2026-01-01 00:55:59 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:55:59.148128 | orchestrator | 2026-01-01 00:55:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:02.197945 | orchestrator | 2026-01-01 00:56:02 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:56:02.204078 | orchestrator | 2026-01-01 00:56:02 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:56:02.204138 | orchestrator | 2026-01-01 00:56:02 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:56:02.204152 | orchestrator | 2026-01-01 00:56:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:05.256549 | orchestrator | 2026-01-01 00:56:05 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:56:05.257606 | orchestrator | 2026-01-01 00:56:05 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:56:05.259750 | orchestrator | 2026-01-01 00:56:05 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:56:05.259814 | orchestrator | 2026-01-01 00:56:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:08.304126 | orchestrator | 2026-01-01 00:56:08 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:56:08.306553 | orchestrator | 2026-01-01 00:56:08 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:56:08.308834 | orchestrator | 2026-01-01 00:56:08 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:56:08.308887 | orchestrator | 2026-01-01 00:56:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:56:11.357744 | orchestrator | 2026-01-01 00:56:11 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state 
STARTED 2026-01-01 00:57:21.581963 | orchestrator | 2026-01-01 00:57:21 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:57:21.584710 | orchestrator | 2026-01-01 00:57:21 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:57:21.584763 | orchestrator | 2026-01-01 00:57:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:57:24.629143 | orchestrator | 2026-01-01 00:57:24 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state STARTED 2026-01-01 00:57:24.630182 | orchestrator | 2026-01-01 00:57:24 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:57:24.632401 | orchestrator | 2026-01-01 00:57:24 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:57:24.632456 | orchestrator | 2026-01-01 00:57:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:57:27.686254 | orchestrator | 2026-01-01 00:57:27 | INFO  | Task fbabd7d5-058e-4576-924e-c3bb1d29bfcc is in state SUCCESS 2026-01-01 00:57:27.688697 | orchestrator | 2026-01-01 00:57:27.688753 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-01 00:57:27.688767 | orchestrator | 2.16.14 2026-01-01 00:57:27.688780 | orchestrator | 2026-01-01 00:57:27.688791 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-01 00:57:27.688802 | orchestrator | 2026-01-01 00:57:27.688813 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-01 00:57:27.688824 | orchestrator | Thursday 01 January 2026 00:45:35 +0000 (0:00:00.883) 0:00:00.883 ****** 2026-01-01 00:57:27.688871 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.689093 | orchestrator | 2026-01-01 00:57:27.689113 | orchestrator | 
TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-01 00:57:27.689125 | orchestrator | Thursday 01 January 2026 00:45:36 +0000 (0:00:01.547) 0:00:02.431 ****** 2026-01-01 00:57:27.689136 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.689147 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.689157 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.689168 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.689179 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.689191 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.689204 | orchestrator | 2026-01-01 00:57:27.689217 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-01 00:57:27.689255 | orchestrator | Thursday 01 January 2026 00:45:38 +0000 (0:00:02.032) 0:00:04.464 ****** 2026-01-01 00:57:27.689269 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.689287 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.689301 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.689314 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.689327 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.689340 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.689352 | orchestrator | 2026-01-01 00:57:27.689365 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-01 00:57:27.689377 | orchestrator | Thursday 01 January 2026 00:45:39 +0000 (0:00:00.978) 0:00:05.443 ****** 2026-01-01 00:57:27.689433 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.689448 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.689461 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.689536 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.689553 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.689733 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.689751 | orchestrator | 2026-01-01 
00:57:27.689770 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-01 00:57:27.689788 | orchestrator | Thursday 01 January 2026 00:45:41 +0000 (0:00:01.243) 0:00:06.686 ****** 2026-01-01 00:57:27.689807 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.689833 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.689850 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.689869 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.689887 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.689906 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.689962 | orchestrator | 2026-01-01 00:57:27.689976 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-01 00:57:27.689994 | orchestrator | Thursday 01 January 2026 00:45:41 +0000 (0:00:00.855) 0:00:07.541 ****** 2026-01-01 00:57:27.690014 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.690244 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.690291 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.690313 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.690332 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.690353 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.690373 | orchestrator | 2026-01-01 00:57:27.690391 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-01 00:57:27.690403 | orchestrator | Thursday 01 January 2026 00:45:42 +0000 (0:00:00.643) 0:00:08.185 ****** 2026-01-01 00:57:27.690414 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.690424 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.690434 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.690445 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.690741 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.690762 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.690775 
| orchestrator | 2026-01-01 00:57:27.690786 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-01 00:57:27.690796 | orchestrator | Thursday 01 January 2026 00:45:43 +0000 (0:00:01.278) 0:00:09.464 ****** 2026-01-01 00:57:27.690808 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.690819 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.690830 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.690840 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.690851 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.690861 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.690872 | orchestrator | 2026-01-01 00:57:27.690882 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-01 00:57:27.690906 | orchestrator | Thursday 01 January 2026 00:45:44 +0000 (0:00:00.790) 0:00:10.254 ****** 2026-01-01 00:57:27.690917 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.690928 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.690938 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.690949 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.690959 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.690969 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.690980 | orchestrator | 2026-01-01 00:57:27.690991 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-01 00:57:27.691002 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.721) 0:00:10.976 ****** 2026-01-01 00:57:27.691019 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 00:57:27.691032 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 00:57:27.691042 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-01-01 00:57:27.691053 | orchestrator | 2026-01-01 00:57:27.691064 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-01 00:57:27.691075 | orchestrator | Thursday 01 January 2026 00:45:45 +0000 (0:00:00.501) 0:00:11.477 ****** 2026-01-01 00:57:27.691085 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.691096 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.691106 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.691140 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.691152 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.691163 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.691251 | orchestrator | 2026-01-01 00:57:27.691316 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-01 00:57:27.691450 | orchestrator | Thursday 01 January 2026 00:45:47 +0000 (0:00:01.755) 0:00:13.232 ****** 2026-01-01 00:57:27.691464 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 00:57:27.691483 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 00:57:27.691495 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 00:57:27.691506 | orchestrator | 2026-01-01 00:57:27.691517 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-01 00:57:27.691539 | orchestrator | Thursday 01 January 2026 00:45:51 +0000 (0:00:03.491) 0:00:16.723 ****** 2026-01-01 00:57:27.691549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-01 00:57:27.691560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-01 00:57:27.691571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-01 00:57:27.691644 | orchestrator | skipping: 
[testbed-node-3] 2026-01-01 00:57:27.691659 | orchestrator | 2026-01-01 00:57:27.691671 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-01 00:57:27.691681 | orchestrator | Thursday 01 January 2026 00:45:51 +0000 (0:00:00.553) 0:00:17.276 ****** 2026-01-01 00:57:27.691694 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.691708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.691720 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.691731 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.691742 | orchestrator | 2026-01-01 00:57:27.691752 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-01 00:57:27.691763 | orchestrator | Thursday 01 January 2026 00:45:53 +0000 (0:00:01.627) 0:00:18.904 ****** 2026-01-01 00:57:27.691776 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.691790 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.691808 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.691819 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.691917 | orchestrator | 2026-01-01 00:57:27.691935 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-01 00:57:27.691951 | orchestrator | Thursday 01 January 2026 00:45:54 +0000 (0:00:00.853) 0:00:19.758 ****** 2026-01-01 00:57:27.691976 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-01 00:45:48.214171', 'end': '2026-01-01 00:45:48.514092', 'delta': '0:00:00.299921', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-01 
00:57:27.691999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-01 00:45:49.798669', 'end': '2026-01-01 00:45:50.090391', 'delta': '0:00:00.291722', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.692012 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-01 00:45:50.672305', 'end': '2026-01-01 00:45:50.963984', 'delta': '0:00:00.291679', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.692023 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.692035 | orchestrator | 2026-01-01 00:57:27.692046 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-01 00:57:27.692056 | orchestrator | Thursday 01 January 2026 00:45:54 +0000 (0:00:00.285) 0:00:20.043 ****** 2026-01-01 00:57:27.692067 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.692079 | orchestrator | 
ok: [testbed-node-3] 2026-01-01 00:57:27.692096 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.692115 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.692133 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.692152 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.692169 | orchestrator | 2026-01-01 00:57:27.692180 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-01 00:57:27.692191 | orchestrator | Thursday 01 January 2026 00:45:56 +0000 (0:00:02.108) 0:00:22.151 ****** 2026-01-01 00:57:27.692202 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-01 00:57:27.692212 | orchestrator | 2026-01-01 00:57:27.692223 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-01 00:57:27.692234 | orchestrator | Thursday 01 January 2026 00:45:57 +0000 (0:00:01.224) 0:00:23.376 ****** 2026-01-01 00:57:27.692244 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.692255 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.692265 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.692276 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.692286 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.692297 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.692307 | orchestrator | 2026-01-01 00:57:27.692318 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-01 00:57:27.692328 | orchestrator | Thursday 01 January 2026 00:45:59 +0000 (0:00:01.996) 0:00:25.372 ****** 2026-01-01 00:57:27.692339 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.692349 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.692360 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.692370 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.692381 | orchestrator | 
skipping: [testbed-node-1] 2026-01-01 00:57:27.692391 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.692402 | orchestrator | 2026-01-01 00:57:27.692420 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-01 00:57:27.692437 | orchestrator | Thursday 01 January 2026 00:46:01 +0000 (0:00:01.559) 0:00:26.932 ****** 2026-01-01 00:57:27.692447 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.692458 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.692468 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.692479 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.692490 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.692500 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.692511 | orchestrator | 2026-01-01 00:57:27.692521 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-01 00:57:27.692532 | orchestrator | Thursday 01 January 2026 00:46:02 +0000 (0:00:01.450) 0:00:28.382 ****** 2026-01-01 00:57:27.692543 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.692553 | orchestrator | 2026-01-01 00:57:27.692564 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-01 00:57:27.692575 | orchestrator | Thursday 01 January 2026 00:46:03 +0000 (0:00:00.243) 0:00:28.626 ****** 2026-01-01 00:57:27.692608 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.692627 | orchestrator | 2026-01-01 00:57:27.692646 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-01 00:57:27.692666 | orchestrator | Thursday 01 January 2026 00:46:03 +0000 (0:00:00.460) 0:00:29.087 ****** 2026-01-01 00:57:27.692685 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.692705 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.692724 | orchestrator | 
skipping: [testbed-node-5] 2026-01-01 00:57:27.692744 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.692755 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.692765 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.692776 | orchestrator | 2026-01-01 00:57:27.692787 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-01 00:57:27.692797 | orchestrator | Thursday 01 January 2026 00:46:04 +0000 (0:00:01.118) 0:00:30.205 ****** 2026-01-01 00:57:27.692808 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.692818 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.692829 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.692839 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.692850 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.692860 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.692871 | orchestrator | 2026-01-01 00:57:27.692881 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-01 00:57:27.692892 | orchestrator | Thursday 01 January 2026 00:46:06 +0000 (0:00:01.504) 0:00:31.710 ****** 2026-01-01 00:57:27.692903 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.692913 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.692923 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.692934 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.692944 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.692955 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.692965 | orchestrator | 2026-01-01 00:57:27.692975 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-01 00:57:27.692986 | orchestrator | Thursday 01 January 2026 00:46:06 +0000 (0:00:00.783) 0:00:32.493 ****** 2026-01-01 00:57:27.692997 | orchestrator | 
skipping: [testbed-node-3] 2026-01-01 00:57:27.693007 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.693018 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.693047 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.693059 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.693069 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.693080 | orchestrator | 2026-01-01 00:57:27.693091 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-01 00:57:27.693111 | orchestrator | Thursday 01 January 2026 00:46:07 +0000 (0:00:01.044) 0:00:33.537 ****** 2026-01-01 00:57:27.693130 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.693141 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.693152 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.693162 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.693173 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.693184 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.693194 | orchestrator | 2026-01-01 00:57:27.693205 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-01 00:57:27.693216 | orchestrator | Thursday 01 January 2026 00:46:08 +0000 (0:00:00.729) 0:00:34.266 ****** 2026-01-01 00:57:27.693227 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.693237 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.693248 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.693258 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.693269 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.693280 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.693290 | orchestrator | 2026-01-01 00:57:27.693301 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-01 
00:57:27.693312 | orchestrator | Thursday 01 January 2026 00:46:09 +0000 (0:00:00.820) 0:00:35.087 ****** 2026-01-01 00:57:27.693323 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.693334 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.693344 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.693355 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.693365 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.693376 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.693387 | orchestrator | 2026-01-01 00:57:27.693398 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-01 00:57:27.693408 | orchestrator | Thursday 01 January 2026 00:46:10 +0000 (0:00:00.817) 0:00:35.905 ****** 2026-01-01 00:57:27.693426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--906f607d--f8ab--576d--9485--c345cfde3c80-osd--block--906f607d--f8ab--576d--9485--c345cfde3c80', 'dm-uuid-LVM-SONYjeZN9GWLGHGRSqE9gmyFPBq2i8yFgW3LbyAuZAjtuiO9nidwiM15Zz4fgBgm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27db58f4--0fe4--54a7--94bd--e6fe47c26f99-osd--block--27db58f4--0fe4--54a7--94bd--e6fe47c26f99', 'dm-uuid-LVM-mNOe8BDEevDiieOx5pbseSYI92ft5O4Cmn3FTAdpuoRwUKtT432N6EvaDSN0TXAk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 
'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693510 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part1', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part14', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part15', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part16', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.693668 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4f4651f5--78d1--505d--b741--249c77d228e7-osd--block--4f4651f5--78d1--505d--b741--249c77d228e7', 'dm-uuid-LVM-fwhI3sFpUzo3WZy0vmQJML1CgRIk8v0dTREW6GKmoiy1t1hrsH0lOfVCAZbUSXx8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--906f607d--f8ab--576d--9485--c345cfde3c80-osd--block--906f607d--f8ab--576d--9485--c345cfde3c80'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6KwxMN-8ZVb-ghoR-r4nZ-md12-SShe-JU1a8A', 'scsi-0QEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec', 'scsi-SQEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.693711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--27db58f4--0fe4--54a7--94bd--e6fe47c26f99-osd--block--27db58f4--0fe4--54a7--94bd--e6fe47c26f99'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-buewbV-07X8-TKO4-J2HA-HHu5-FICq-7n30Rf', 'scsi-0QEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486', 'scsi-SQEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.693724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1', 'scsi-SQEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.693744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.693769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--e5dc050d--fe50--5167--b35b--32fd51d3d555-osd--block--e5dc050d--fe50--5167--b35b--32fd51d3d555', 'dm-uuid-LVM-ZxPtyy9M4L3rQExOpuVfQhUHxAUvkDO1GOfgxNBNkDww4BJylWY5eDdcKW6jqPiL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693826 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.693841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.693914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21a5f53a--dc04--53e0--afe9--de267ba79db4-osd--block--21a5f53a--dc04--53e0--afe9--de267ba79db4', 'dm-uuid-LVM-jcGAUv83n2cFw4SkjqywuBaM26nHu2nzrBARK8Q6NIOTfqlkkSnEZoKKYb5yRhJ3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4f4651f5--78d1--505d--b741--249c77d228e7-osd--block--4f4651f5--78d1--505d--b741--249c77d228e7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cpSwcY-Tv7Z-ZbMx-2Azw-jH5E-jiVi-dVT9ng', 'scsi-0QEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0', 'scsi-SQEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.693948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b87804f1--5161--5843--851c--861f025ab6ce-osd--block--b87804f1--5161--5843--851c--861f025ab6ce', 'dm-uuid-LVM-SdgupNqEp01AdaxqWCIDJUYHuls443yNnIKXlX0XsZXcY7Vqe0rjrVmyW8IbMBDs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.693966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e5dc050d--fe50--5167--b35b--32fd51d3d555-osd--block--e5dc050d--fe50--5167--b35b--32fd51d3d555'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K5uM70-yVwd-0zbA-82AT-IsIp-dFnv-jC7627', 'scsi-0QEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1', 'scsi-SQEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.693978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e', 'scsi-SQEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.693989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-28-00']}, 'model': 'QEMU 
DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694074 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.694085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-01-01 00:57:27.694167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df', 'scsi-SQEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part1', 'scsi-SQEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part14', 'scsi-SQEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part15', 'scsi-SQEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part16', 'scsi-SQEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694277 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--21a5f53a--dc04--53e0--afe9--de267ba79db4-osd--block--21a5f53a--dc04--53e0--afe9--de267ba79db4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CGBgaN-Wq2R-0G1i-7R7M-nuLJ-oM7J-JKWrs0', 'scsi-0QEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d', 'scsi-SQEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b87804f1--5161--5843--851c--861f025ab6ce-osd--block--b87804f1--5161--5843--851c--861f025ab6ce'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V8spcX-bDDY-3Im3-h7v3-31EX-z9EY-oSFxce', 'scsi-0QEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0', 'scsi-SQEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb', 'scsi-SQEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694391 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.694402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694461 | orchestrator | skipping: [testbed-node-5] 2026-01-01 
00:57:27.694482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001', 'scsi-SQEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694522 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694532 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.694543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-01 00:57:27.694607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:57:27.694665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a', 'scsi-SQEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part1', 'scsi-SQEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part14', 'scsi-SQEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part15', 'scsi-SQEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part16', 'scsi-SQEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:57:27.694774 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.694784 | orchestrator | 2026-01-01 00:57:27.694794 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-01 00:57:27.694804 | orchestrator | Thursday 01 January 2026 00:46:12 +0000 (0:00:01.864) 0:00:37.769 ****** 2026-01-01 00:57:27.694815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--906f607d--f8ab--576d--9485--c345cfde3c80-osd--block--906f607d--f8ab--576d--9485--c345cfde3c80', 'dm-uuid-LVM-SONYjeZN9GWLGHGRSqE9gmyFPBq2i8yFgW3LbyAuZAjtuiO9nidwiM15Zz4fgBgm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.694827 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27db58f4--0fe4--54a7--94bd--e6fe47c26f99-osd--block--27db58f4--0fe4--54a7--94bd--e6fe47c26f99', 'dm-uuid-LVM-mNOe8BDEevDiieOx5pbseSYI92ft5O4Cmn3FTAdpuoRwUKtT432N6EvaDSN0TXAk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.694837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.694898 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.694910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.694928 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.694938 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.694949 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.694959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4f4651f5--78d1--505d--b741--249c77d228e7-osd--block--4f4651f5--78d1--505d--b741--249c77d228e7', 'dm-uuid-LVM-fwhI3sFpUzo3WZy0vmQJML1CgRIk8v0dTREW6GKmoiy1t1hrsH0lOfVCAZbUSXx8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.694975 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.694989 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e5dc050d--fe50--5167--b35b--32fd51d3d555-osd--block--e5dc050d--fe50--5167--b35b--32fd51d3d555', 'dm-uuid-LVM-ZxPtyy9M4L3rQExOpuVfQhUHxAUvkDO1GOfgxNBNkDww4BJylWY5eDdcKW6jqPiL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.695669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.695697 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.695717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part1', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part14', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part15', 
'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part16', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.695802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--906f607d--f8ab--576d--9485--c345cfde3c80-osd--block--906f607d--f8ab--576d--9485--c345cfde3c80'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6KwxMN-8ZVb-ghoR-r4nZ-md12-SShe-JU1a8A', 'scsi-0QEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec', 'scsi-SQEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.695824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.695843 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--27db58f4--0fe4--54a7--94bd--e6fe47c26f99-osd--block--27db58f4--0fe4--54a7--94bd--e6fe47c26f99'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-buewbV-07X8-TKO4-J2HA-HHu5-FICq-7n30Rf', 'scsi-0QEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486', 'scsi-SQEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.695861 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1', 'scsi-SQEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.695889 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.695912 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.695991 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696038 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21a5f53a--dc04--53e0--afe9--de267ba79db4-osd--block--21a5f53a--dc04--53e0--afe9--de267ba79db4', 'dm-uuid-LVM-jcGAUv83n2cFw4SkjqywuBaM26nHu2nzrBARK8Q6NIOTfqlkkSnEZoKKYb5yRhJ3'], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696049 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696067 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696077 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b87804f1--5161--5843--851c--861f025ab6ce-osd--block--b87804f1--5161--5843--851c--861f025ab6ce', 'dm-uuid-LVM-SdgupNqEp01AdaxqWCIDJUYHuls443yNnIKXlX0XsZXcY7Vqe0rjrVmyW8IbMBDs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696092 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696166 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696187 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696199 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696223 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-01 00:57:27.696313 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4f4651f5--78d1--505d--b741--249c77d228e7-osd--block--4f4651f5--78d1--505d--b741--249c77d228e7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cpSwcY-Tv7Z-ZbMx-2Azw-jH5E-jiVi-dVT9ng', 'scsi-0QEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0', 'scsi-SQEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696333 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696345 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--e5dc050d--fe50--5167--b35b--32fd51d3d555-osd--block--e5dc050d--fe50--5167--b35b--32fd51d3d555'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K5uM70-yVwd-0zbA-82AT-IsIp-dFnv-jC7627', 'scsi-0QEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1', 'scsi-SQEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696362 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e', 'scsi-SQEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696381 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696451 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696471 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.696483 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.696499 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.696509 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.696520 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696669 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696697 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--21a5f53a--dc04--53e0--afe9--de267ba79db4-osd--block--21a5f53a--dc04--53e0--afe9--de267ba79db4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CGBgaN-Wq2R-0G1i-7R7M-nuLJ-oM7J-JKWrs0', 'scsi-0QEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d', 'scsi-SQEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696727 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b87804f1--5161--5843--851c--861f025ab6ce-osd--block--b87804f1--5161--5843--851c--861f025ab6ce'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V8spcX-bDDY-3Im3-h7v3-31EX-z9EY-oSFxce', 'scsi-0QEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0', 'scsi-SQEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696753 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb', 'scsi-SQEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696835 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696847 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.696867 | orchestrator | 
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.696890 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.696898 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.696911 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.696919 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.696981 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.696995 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.697018 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df', 'scsi-SQEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part1', 'scsi-SQEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part14', 'scsi-SQEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part15', 
'scsi-SQEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part16', 'scsi-SQEMU_QEMU_HARDDISK_19893af0-ead3-467d-b949-06e8d6b388df-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.697076 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.697091 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.697103 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697118 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697126 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697135 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697173 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697182 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.697190 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697199 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.697274 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.697298 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:57:27.697312 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001', 'scsi-SQEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb460ef8-794a-40a4-830a-8c8c7cea0001-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-01 00:57:27.697371 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697384 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.697396 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697411 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697419 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697427 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697440 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697448 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697506 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697543 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697559 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a', 'scsi-SQEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part1', 'scsi-SQEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part14', 'scsi-SQEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part15', 'scsi-SQEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part16', 'scsi-SQEMU_QEMU_HARDDISK_ef1fdaf7-633a-4cbe-8d1e-08dd2c1cc62a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697568 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-01 00:57:27.697576 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.697625 | orchestrator |
2026-01-01 00:57:27.697690 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-01 00:57:27.697706 | orchestrator | Thursday 01 January 2026 00:46:14 +0000 (0:00:01.912) 0:00:39.682 ******
2026-01-01 00:57:27.697717 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.697726 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.697734 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.697742 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:27.697750 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:27.697758 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:27.697777 | orchestrator |
2026-01-01 00:57:27.697786 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-01 00:57:27.697794 | orchestrator | Thursday 01 January 2026 00:46:15 +0000 (0:00:01.258) 0:00:40.940 ******
2026-01-01 00:57:27.697801 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.697809 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.697817 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.697825 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:27.697833 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:27.697840 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:27.697848 | orchestrator |
2026-01-01 00:57:27.697856 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-01 00:57:27.697864 | orchestrator | Thursday 01 January 2026 00:46:16 +0000 (0:00:01.087) 0:00:42.028 ******
2026-01-01 00:57:27.697871 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.697879 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.697887 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.697895 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.697903 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.697911 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.697918 | orchestrator |
2026-01-01 00:57:27.697926 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-01 00:57:27.697934 | orchestrator | Thursday 01 January 2026 00:46:17 +0000 (0:00:01.440) 0:00:43.468 ******
2026-01-01 00:57:27.697942 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.697949 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.697957 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.697965 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.697973 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.697980 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.697988 | orchestrator |
2026-01-01 00:57:27.697996 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-01 00:57:27.698004 | orchestrator | Thursday 01 January 2026 00:46:18 +0000 (0:00:00.729) 0:00:44.198 ******
2026-01-01 00:57:27.698012 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.698060 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.698069 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.698076 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.698084 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.698092 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.698100 | orchestrator |
2026-01-01 00:57:27.698108 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-01 00:57:27.698116 | orchestrator | Thursday 01 January 2026 00:46:19 +0000 (0:00:00.986) 0:00:45.185 ******
2026-01-01 00:57:27.698124 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.698131 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.698139 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.698147 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.698154 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.698162 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.698170 | orchestrator |
2026-01-01 00:57:27.698178 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-01 00:57:27.698186 | orchestrator | Thursday 01 January 2026 00:46:20 +0000 (0:00:01.328) 0:00:46.513 ******
2026-01-01 00:57:27.698200 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-01 00:57:27.698208 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-01 00:57:27.698216 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-01 00:57:27.698224 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-01 00:57:27.698232 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-01 00:57:27.698239 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-01 00:57:27.698247 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-01 00:57:27.698255 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-01 00:57:27.698262 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-01 00:57:27.698270 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-01 00:57:27.698278 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-01 00:57:27.698286 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-01 00:57:27.698294 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-01 00:57:27.698306 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-01 00:57:27.698314 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-01 00:57:27.698324 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-01 00:57:27.698334 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-01 00:57:27.698344 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-01 00:57:27.698353 | orchestrator |
2026-01-01 00:57:27.698363 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-01 00:57:27.698372 | orchestrator | Thursday 01 January 2026 00:46:26 +0000 (0:00:05.410) 0:00:51.924 ******
2026-01-01 00:57:27.698381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-01 00:57:27.698390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-01 00:57:27.698399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-01 00:57:27.698408 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.698418 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-01 00:57:27.698427 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-01 00:57:27.698440 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-01 00:57:27.698453 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-01 00:57:27.698499 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-01 00:57:27.698511 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-01 00:57:27.698522 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.698532 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.698540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-01 00:57:27.698549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-01 00:57:27.698557 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-01 00:57:27.698566 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.698575 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-01 00:57:27.698602 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-01 00:57:27.698612 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-01 00:57:27.698620 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-01 00:57:27.698627 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.698635 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-01 00:57:27.698643 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-01 00:57:27.698650 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.698658 | orchestrator |
2026-01-01 00:57:27.698666 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-01 00:57:27.698673 | orchestrator | Thursday 01 January 2026 00:46:27 +0000 (0:00:01.622) 0:00:53.397 ******
2026-01-01 00:57:27.698688 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.698696 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.698703 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.698712 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:57:27.698720 | orchestrator |
2026-01-01 00:57:27.698727 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-01 00:57:27.698736 | orchestrator | Thursday 01 January 2026 00:46:29 +0000 (0:00:01.622) 0:00:55.020 ******
2026-01-01 00:57:27.698744 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.698752 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.698759 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.698767 | orchestrator |
2026-01-01 00:57:27.698775 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-01 00:57:27.698782 | orchestrator | Thursday 01 January 2026 00:46:29 +0000 (0:00:00.561) 0:00:55.581 ******
2026-01-01 00:57:27.698790 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.698798 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.698805 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.698813 | orchestrator |
2026-01-01 00:57:27.698821 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-01 00:57:27.698829 | orchestrator | Thursday 01 January 2026 00:46:30 +0000 (0:00:00.631) 0:00:56.212 ******
2026-01-01 00:57:27.698836 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.698844 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.698852 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.698859 | orchestrator |
2026-01-01 00:57:27.698867 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-01 00:57:27.698875 | orchestrator | Thursday 01 January 2026 00:46:31 +0000 (0:00:00.771) 0:00:56.984 ******
2026-01-01 00:57:27.698883 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.698891 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.698898 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.698906 | orchestrator |
2026-01-01 00:57:27.698914 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-01 00:57:27.698922 | orchestrator | Thursday 01 January 2026 00:46:31 +0000 (0:00:00.506) 0:00:57.490 ******
2026-01-01 00:57:27.698929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 00:57:27.698937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-01 00:57:27.698945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-01 00:57:27.698953 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.698960 | orchestrator |
2026-01-01 00:57:27.698968 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-01 00:57:27.698976 | orchestrator | Thursday 01 January 2026 00:46:32 +0000 (0:00:00.473) 0:00:57.963 ******
2026-01-01 00:57:27.698983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 00:57:27.698991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-01 00:57:27.699003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-01 00:57:27.699011 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.699018 | orchestrator |
2026-01-01 00:57:27.699026 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-01 00:57:27.699034 | orchestrator | Thursday 01 January 2026 00:46:32 +0000 (0:00:00.640) 0:00:58.604 ******
2026-01-01 00:57:27.699042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 00:57:27.699049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-01 00:57:27.699057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-01 00:57:27.699065 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.699072 | orchestrator |
2026-01-01 00:57:27.699080 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-01 00:57:27.699093 | orchestrator | Thursday 01 January 2026 00:46:33 +0000 (0:00:00.576) 0:00:59.181 ******
2026-01-01 00:57:27.699101 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.699108 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.699116 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.699124 | orchestrator |
2026-01-01 00:57:27.699131 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-01 00:57:27.699139 | orchestrator | Thursday 01 January 2026 00:46:33 +0000 (0:00:00.433) 0:00:59.614 ******
2026-01-01 00:57:27.699147 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-01 00:57:27.699155 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-01 00:57:27.699187 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-01 00:57:27.699197 | orchestrator |
2026-01-01 00:57:27.699205 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-01 00:57:27.699213 | orchestrator | Thursday 01 January 2026 00:46:35 +0000 (0:00:01.095) 0:01:00.709 ******
2026-01-01 00:57:27.699221 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-01 00:57:27.699229 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-01 00:57:27.699237 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-01 00:57:27.699244 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 00:57:27.699252 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-01 00:57:27.699260 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-01 00:57:27.699268 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-01 00:57:27.699276 | orchestrator |
2026-01-01 00:57:27.699284 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-01 00:57:27.699292 | orchestrator | Thursday 01 January 2026 00:46:35 +0000 (0:00:00.882) 0:01:01.592 ******
2026-01-01 00:57:27.699300 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-01 00:57:27.699307 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-01 00:57:27.699315 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-01 00:57:27.699323 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 00:57:27.699331 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-01 00:57:27.699339 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-01 00:57:27.699347 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-01 00:57:27.699355 | orchestrator |
2026-01-01 00:57:27.699363 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-01 00:57:27.699371 | orchestrator | Thursday 01 January 2026 00:46:37 +0000 (0:00:01.948) 0:01:03.540 ******
2026-01-01 00:57:27.699379 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:57:27.699388 | orchestrator |
2026-01-01 00:57:27.699396 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-01 00:57:27.699404 | orchestrator | Thursday 01 January 2026 00:46:39 +0000 (0:00:01.395) 0:01:04.936 ******
2026-01-01 00:57:27.699412 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:57:27.699421 | orchestrator |
2026-01-01 00:57:27.699429 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-01 00:57:27.699436 | orchestrator | Thursday 01 January 2026 00:46:41 +0000 (0:00:01.797) 0:01:06.734 ******
2026-01-01 00:57:27.699450 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.699458 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.699466 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.699474 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:27.699482 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:27.699490 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:27.699497 | orchestrator |
2026-01-01 00:57:27.699505 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-01 00:57:27.699513 | orchestrator | Thursday 01 January 2026 00:46:42 +0000 (0:00:01.684) 0:01:08.418 ******
2026-01-01 00:57:27.699521 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.699529 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.699537 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.699544 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.699552 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.699560 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.699568 | orchestrator |
2026-01-01 00:57:27.699576 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-01 00:57:27.699633 | orchestrator | Thursday 01 January 2026 00:46:44 +0000 (0:00:01.247) 0:01:09.666 ******
2026-01-01 00:57:27.699643 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.699651 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.699659 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.699667 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.699674 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.699682 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.699690 | orchestrator |
2026-01-01 00:57:27.699698 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-01 00:57:27.699705 | orchestrator | Thursday 01 January 2026 00:46:45 +0000 (0:00:01.255) 0:01:10.922 ******
2026-01-01 00:57:27.699713 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.699721 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.699728 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.699736 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.699744 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.699751 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.699759 | orchestrator |
2026-01-01 00:57:27.699767 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-01 00:57:27.699775 | orchestrator | Thursday 01 January 2026 00:46:46 +0000 (0:00:00.868) 0:01:11.791 ******
2026-01-01 00:57:27.699783 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.699791 | orchestrator | skipping: 
[testbed-node-4] 2026-01-01 00:57:27.699798 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.699808 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.699822 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.699872 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.699884 | orchestrator | 2026-01-01 00:57:27.699892 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-01 00:57:27.699900 | orchestrator | Thursday 01 January 2026 00:46:47 +0000 (0:00:01.293) 0:01:13.084 ****** 2026-01-01 00:57:27.699908 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.699916 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.699922 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.699929 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.699935 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.699942 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.699949 | orchestrator | 2026-01-01 00:57:27.699955 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-01 00:57:27.699962 | orchestrator | Thursday 01 January 2026 00:46:48 +0000 (0:00:00.705) 0:01:13.790 ****** 2026-01-01 00:57:27.699969 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.699975 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.699982 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.699989 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.699995 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.700006 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.700013 | orchestrator | 2026-01-01 00:57:27.700019 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-01 00:57:27.700026 | orchestrator | Thursday 01 January 2026 00:46:49 +0000 (0:00:00.946) 0:01:14.737 ****** 
2026-01-01 00:57:27.700033 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.700039 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.700046 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.700053 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.700059 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.700066 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.700073 | orchestrator | 2026-01-01 00:57:27.700079 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-01 00:57:27.700086 | orchestrator | Thursday 01 January 2026 00:46:50 +0000 (0:00:01.258) 0:01:15.996 ****** 2026-01-01 00:57:27.700093 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.700099 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.700106 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.700113 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.700119 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.700126 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.700132 | orchestrator | 2026-01-01 00:57:27.700139 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-01 00:57:27.700146 | orchestrator | Thursday 01 January 2026 00:46:51 +0000 (0:00:01.421) 0:01:17.417 ****** 2026-01-01 00:57:27.700153 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.700159 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.700166 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.700172 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.700179 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.700186 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.700192 | orchestrator | 2026-01-01 00:57:27.700199 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-01 00:57:27.700205 | orchestrator | Thursday 
01 January 2026 00:46:52 +0000 (0:00:00.589) 0:01:18.007 ****** 2026-01-01 00:57:27.700212 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.700219 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.700225 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.700232 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.700238 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.700245 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.700252 | orchestrator | 2026-01-01 00:57:27.700258 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-01 00:57:27.700265 | orchestrator | Thursday 01 January 2026 00:46:53 +0000 (0:00:01.024) 0:01:19.031 ****** 2026-01-01 00:57:27.700272 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.700278 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.700285 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.700292 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.700298 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.700305 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.700311 | orchestrator | 2026-01-01 00:57:27.700318 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-01 00:57:27.700325 | orchestrator | Thursday 01 January 2026 00:46:54 +0000 (0:00:00.621) 0:01:19.652 ****** 2026-01-01 00:57:27.700331 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.700338 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.700344 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.700351 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.700358 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.700364 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.700371 | orchestrator | 2026-01-01 00:57:27.700377 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] 
****************************** 2026-01-01 00:57:27.700387 | orchestrator | Thursday 01 January 2026 00:46:54 +0000 (0:00:00.795) 0:01:20.448 ****** 2026-01-01 00:57:27.700398 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.700405 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.700412 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.700419 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.700425 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.700432 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.700438 | orchestrator | 2026-01-01 00:57:27.700445 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-01 00:57:27.700452 | orchestrator | Thursday 01 January 2026 00:46:55 +0000 (0:00:00.959) 0:01:21.407 ****** 2026-01-01 00:57:27.700458 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.700465 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.700472 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.700478 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.700484 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.700491 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.700498 | orchestrator | 2026-01-01 00:57:27.700504 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-01 00:57:27.700511 | orchestrator | Thursday 01 January 2026 00:46:56 +0000 (0:00:00.969) 0:01:22.377 ****** 2026-01-01 00:57:27.700518 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.700524 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.700531 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.700537 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.700562 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.700570 | orchestrator | skipping: [testbed-node-2] 2026-01-01 
00:57:27.700576 | orchestrator | 2026-01-01 00:57:27.700596 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-01 00:57:27.700604 | orchestrator | Thursday 01 January 2026 00:46:57 +0000 (0:00:00.795) 0:01:23.173 ****** 2026-01-01 00:57:27.700610 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.700617 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.700624 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.700630 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.700637 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.700643 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.700650 | orchestrator | 2026-01-01 00:57:27.700657 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-01 00:57:27.700663 | orchestrator | Thursday 01 January 2026 00:46:58 +0000 (0:00:00.811) 0:01:23.984 ****** 2026-01-01 00:57:27.700670 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.700676 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.700683 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.700689 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.700696 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.700702 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.700709 | orchestrator | 2026-01-01 00:57:27.700716 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-01 00:57:27.700722 | orchestrator | Thursday 01 January 2026 00:46:59 +0000 (0:00:00.932) 0:01:24.916 ****** 2026-01-01 00:57:27.700729 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.700735 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.700742 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.700748 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.700755 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.700761 | 
orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.700768 | orchestrator | 2026-01-01 00:57:27.700774 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-01 00:57:27.700781 | orchestrator | Thursday 01 January 2026 00:47:00 +0000 (0:00:01.203) 0:01:26.120 ****** 2026-01-01 00:57:27.700787 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.700794 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.700800 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.700807 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.700818 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.700824 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.700831 | orchestrator | 2026-01-01 00:57:27.700838 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-01 00:57:27.700844 | orchestrator | Thursday 01 January 2026 00:47:02 +0000 (0:00:01.659) 0:01:27.779 ****** 2026-01-01 00:57:27.700851 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.700857 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.700864 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.700870 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.700877 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.700883 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.700890 | orchestrator | 2026-01-01 00:57:27.700896 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-01 00:57:27.700903 | orchestrator | Thursday 01 January 2026 00:47:04 +0000 (0:00:02.628) 0:01:30.408 ****** 2026-01-01 00:57:27.700910 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.700917 | orchestrator | 
2026-01-01 00:57:27.700923 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-01 00:57:27.700930 | orchestrator | Thursday 01 January 2026 00:47:06 +0000 (0:00:01.361) 0:01:31.769 ****** 2026-01-01 00:57:27.700936 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.700943 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.700949 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.700956 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.700962 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.700969 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.700975 | orchestrator | 2026-01-01 00:57:27.700982 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-01 00:57:27.700988 | orchestrator | Thursday 01 January 2026 00:47:06 +0000 (0:00:00.577) 0:01:32.347 ****** 2026-01-01 00:57:27.700995 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.701002 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.701008 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.701015 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.701021 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.701028 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.701034 | orchestrator | 2026-01-01 00:57:27.701041 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-01 00:57:27.701051 | orchestrator | Thursday 01 January 2026 00:47:07 +0000 (0:00:00.908) 0:01:33.255 ****** 2026-01-01 00:57:27.701058 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 00:57:27.701064 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 00:57:27.701071 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 00:57:27.701078 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 00:57:27.701084 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 00:57:27.701091 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-01 00:57:27.701097 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 00:57:27.701104 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 00:57:27.701110 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 00:57:27.701117 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 00:57:27.701144 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 00:57:27.701160 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-01 00:57:27.701167 | orchestrator | 2026-01-01 00:57:27.701174 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-01 00:57:27.701180 | orchestrator | Thursday 01 January 2026 00:47:08 +0000 (0:00:01.353) 0:01:34.609 ****** 2026-01-01 00:57:27.701187 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.701194 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.701200 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.701207 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.701214 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.701220 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.701227 | orchestrator | 2026-01-01 00:57:27.701233 | orchestrator | TASK [ceph-container-common : Restore certificates 
selinux context] ************ 2026-01-01 00:57:27.701240 | orchestrator | Thursday 01 January 2026 00:47:10 +0000 (0:00:01.320) 0:01:35.929 ****** 2026-01-01 00:57:27.701247 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.701253 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.701260 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.701267 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.701273 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.701280 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.701286 | orchestrator | 2026-01-01 00:57:27.701293 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-01 00:57:27.701300 | orchestrator | Thursday 01 January 2026 00:47:10 +0000 (0:00:00.604) 0:01:36.534 ****** 2026-01-01 00:57:27.701306 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.701313 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.701319 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.701326 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.701332 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.701339 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.701345 | orchestrator | 2026-01-01 00:57:27.701352 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-01 00:57:27.701359 | orchestrator | Thursday 01 January 2026 00:47:11 +0000 (0:00:00.759) 0:01:37.294 ****** 2026-01-01 00:57:27.701365 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.701372 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.701378 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.701385 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.701392 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.701398 | orchestrator | skipping: [testbed-node-2] 
2026-01-01 00:57:27.701405 | orchestrator | 2026-01-01 00:57:27.701411 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-01 00:57:27.701418 | orchestrator | Thursday 01 January 2026 00:47:12 +0000 (0:00:00.586) 0:01:37.880 ****** 2026-01-01 00:57:27.701425 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.701432 | orchestrator | 2026-01-01 00:57:27.701438 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-01 00:57:27.701445 | orchestrator | Thursday 01 January 2026 00:47:13 +0000 (0:00:01.220) 0:01:39.101 ****** 2026-01-01 00:57:27.701451 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.701458 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.701465 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.701471 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.701478 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.701484 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.701491 | orchestrator | 2026-01-01 00:57:27.701498 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-01 00:57:27.701504 | orchestrator | Thursday 01 January 2026 00:48:09 +0000 (0:00:56.461) 0:02:35.563 ****** 2026-01-01 00:57:27.701511 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 00:57:27.701521 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 00:57:27.701528 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-01 00:57:27.701535 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.701542 | orchestrator | skipping: [testbed-node-4] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 00:57:27.701548 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 00:57:27.701558 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-01 00:57:27.701565 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.701571 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 00:57:27.701578 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 00:57:27.701599 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-01 00:57:27.701611 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.701623 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 00:57:27.701634 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 00:57:27.701645 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-01 00:57:27.701651 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.701658 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 00:57:27.701665 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 00:57:27.701671 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-01 00:57:27.701678 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.701706 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-01 00:57:27.701714 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-01 00:57:27.701721 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-01 00:57:27.701727 | orchestrator | skipping: 
[testbed-node-2] 2026-01-01 00:57:27.701734 | orchestrator | 2026-01-01 00:57:27.701741 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-01 00:57:27.701747 | orchestrator | Thursday 01 January 2026 00:48:10 +0000 (0:00:00.836) 0:02:36.399 ****** 2026-01-01 00:57:27.701754 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.701761 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.701767 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.701774 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.701780 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.701787 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.701793 | orchestrator | 2026-01-01 00:57:27.701800 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-01 00:57:27.701807 | orchestrator | Thursday 01 January 2026 00:48:11 +0000 (0:00:01.034) 0:02:37.434 ****** 2026-01-01 00:57:27.701813 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.701820 | orchestrator | 2026-01-01 00:57:27.701827 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-01 00:57:27.701833 | orchestrator | Thursday 01 January 2026 00:48:11 +0000 (0:00:00.154) 0:02:37.588 ****** 2026-01-01 00:57:27.701840 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.701846 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.701853 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.701860 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.701866 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.701873 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.701879 | orchestrator | 2026-01-01 00:57:27.701886 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-01 00:57:27.701898 | 
orchestrator | Thursday 01 January 2026 00:48:12 +0000 (0:00:00.741) 0:02:38.330 ****** 2026-01-01 00:57:27.701904 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.701911 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.701918 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.701924 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.701931 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.701937 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.701944 | orchestrator | 2026-01-01 00:57:27.701951 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-01 00:57:27.701957 | orchestrator | Thursday 01 January 2026 00:48:13 +0000 (0:00:01.150) 0:02:39.480 ****** 2026-01-01 00:57:27.701964 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.701970 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.701977 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.701983 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.701990 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.701997 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.702003 | orchestrator | 2026-01-01 00:57:27.702010 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-01 00:57:27.702037 | orchestrator | Thursday 01 January 2026 00:48:14 +0000 (0:00:00.762) 0:02:40.243 ****** 2026-01-01 00:57:27.702045 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.702056 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.702067 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.702079 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.702091 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.702103 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.702115 | orchestrator | 2026-01-01 00:57:27.702127 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-01 00:57:27.702134 | orchestrator | Thursday 01 January 2026 00:48:17 +0000 (0:00:02.942) 0:02:43.186 ****** 2026-01-01 00:57:27.702141 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.702147 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.702154 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.702160 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.702166 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.702173 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.702179 | orchestrator | 2026-01-01 00:57:27.702186 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-01 00:57:27.702192 | orchestrator | Thursday 01 January 2026 00:48:18 +0000 (0:00:00.730) 0:02:43.916 ****** 2026-01-01 00:57:27.702199 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.702207 | orchestrator | 2026-01-01 00:57:27.702213 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-01 00:57:27.702223 | orchestrator | Thursday 01 January 2026 00:48:19 +0000 (0:00:01.469) 0:02:45.385 ****** 2026-01-01 00:57:27.702230 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.702237 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.702243 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.702250 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.702256 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.702263 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.702269 | orchestrator | 2026-01-01 00:57:27.702276 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-01 00:57:27.702282 | 
orchestrator | Thursday 01 January 2026 00:48:20 +0000 (0:00:01.106) 0:02:46.492 ******
2026-01-01 00:57:27.702289 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.702295 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.702302 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.702308 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.702315 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.702337 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.702344 | orchestrator |
2026-01-01 00:57:27.702350 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-01 00:57:27.702357 | orchestrator | Thursday 01 January 2026 00:48:21 +0000 (0:00:01.020) 0:02:47.513 ******
2026-01-01 00:57:27.702363 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.702370 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.702399 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.702407 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.702413 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.702420 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.702426 | orchestrator |
2026-01-01 00:57:27.702433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-01 00:57:27.702440 | orchestrator | Thursday 01 January 2026 00:48:22 +0000 (0:00:00.927) 0:02:48.440 ******
2026-01-01 00:57:27.702447 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.702453 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.702460 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.702466 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.702473 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.702479 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.702486 | orchestrator |
2026-01-01 00:57:27.702493 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-01 00:57:27.702499 | orchestrator | Thursday 01 January 2026 00:48:23 +0000 (0:00:00.640) 0:02:49.081 ******
2026-01-01 00:57:27.702506 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.702513 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.702519 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.702526 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.702533 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.702539 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.702546 | orchestrator |
2026-01-01 00:57:27.702553 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-01 00:57:27.702559 | orchestrator | Thursday 01 January 2026 00:48:24 +0000 (0:00:00.747) 0:02:49.829 ******
2026-01-01 00:57:27.702566 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.702573 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.702579 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.702632 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.702643 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.702650 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.702657 | orchestrator |
2026-01-01 00:57:27.702663 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-01 00:57:27.702670 | orchestrator | Thursday 01 January 2026 00:48:24 +0000 (0:00:00.759) 0:02:50.589 ******
2026-01-01 00:57:27.702676 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.702683 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.702689 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.702696 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.702702 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.702709 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.702715 | orchestrator |
2026-01-01 00:57:27.702722 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-01 00:57:27.702728 | orchestrator | Thursday 01 January 2026 00:48:25 +0000 (0:00:00.842) 0:02:51.431 ******
2026-01-01 00:57:27.702735 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.702741 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.702748 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.702754 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.702761 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.702767 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.702774 | orchestrator |
2026-01-01 00:57:27.702780 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-01 00:57:27.702792 | orchestrator | Thursday 01 January 2026 00:48:26 +0000 (0:00:01.099) 0:02:52.531 ******
2026-01-01 00:57:27.702799 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.702806 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.702812 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.702819 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:27.702825 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:27.702832 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:27.702838 | orchestrator |
2026-01-01 00:57:27.702845 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-01 00:57:27.702851 | orchestrator | Thursday 01 January 2026 00:48:28 +0000 (0:00:01.345) 0:02:53.876 ******
2026-01-01 00:57:27.702858 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-2, testbed-node-1
2026-01-01 00:57:27.702865 | orchestrator |
2026-01-01 00:57:27.702872 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-01 00:57:27.702878 | orchestrator | Thursday 01 January 2026 00:48:29 +0000 (0:00:01.264) 0:02:55.141 ******
2026-01-01 00:57:27.702884 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-01 00:57:27.702891 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-01 00:57:27.702897 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-01 00:57:27.702906 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-01 00:57:27.702913 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-01 00:57:27.702919 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-01 00:57:27.702925 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-01 00:57:27.702931 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-01 00:57:27.702937 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-01 00:57:27.702943 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-01 00:57:27.702949 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-01 00:57:27.702955 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-01 00:57:27.702961 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-01 00:57:27.702967 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-01 00:57:27.702973 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-01 00:57:27.702979 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-01 00:57:27.702985 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-01 00:57:27.702992 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-01 00:57:27.703019 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-01 00:57:27.703026 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-01 00:57:27.703032 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-01 00:57:27.703038 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-01 00:57:27.703044 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-01 00:57:27.703051 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-01 00:57:27.703057 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-01 00:57:27.703063 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-01 00:57:27.703069 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-01 00:57:27.703075 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-01 00:57:27.703081 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-01 00:57:27.703087 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-01 00:57:27.703093 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-01 00:57:27.703103 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-01 00:57:27.703110 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-01 00:57:27.703116 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-01 00:57:27.703122 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-01 00:57:27.703128 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-01 00:57:27.703134 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-01 00:57:27.703140 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-01 00:57:27.703146 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-01 00:57:27.703152 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-01 00:57:27.703158 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-01 00:57:27.703165 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-01 00:57:27.703171 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-01 00:57:27.703177 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-01 00:57:27.703183 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-01 00:57:27.703189 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-01 00:57:27.703195 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-01 00:57:27.703201 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-01 00:57:27.703207 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-01 00:57:27.703213 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-01 00:57:27.703219 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-01 00:57:27.703225 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-01 00:57:27.703231 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-01 00:57:27.703237 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-01 00:57:27.703243 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-01 00:57:27.703250 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-01 00:57:27.703256 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-01 00:57:27.703262 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-01 00:57:27.703268 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-01 00:57:27.703274 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-01 00:57:27.703280 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-01 00:57:27.703286 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-01 00:57:27.703292 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-01 00:57:27.703298 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-01 00:57:27.703307 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-01 00:57:27.703313 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-01 00:57:27.703319 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-01 00:57:27.703325 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-01 00:57:27.703331 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-01 00:57:27.703337 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-01 00:57:27.703344 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-01 00:57:27.703350 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-01 00:57:27.703360 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-01 00:57:27.703366 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-01 00:57:27.703372 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-01 00:57:27.703378 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-01 00:57:27.703401 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-01 00:57:27.703408 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-01 00:57:27.703415 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-01 00:57:27.703421 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-01 00:57:27.703427 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-01 00:57:27.703433 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-01 00:57:27.703439 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-01 00:57:27.703445 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-01 00:57:27.703451 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-01 00:57:27.703457 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-01 00:57:27.703463 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-01 00:57:27.703469 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-01 00:57:27.703476 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-01 00:57:27.703482 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-01 00:57:27.703488 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-01 00:57:27.703494 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-01 00:57:27.703500 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-01 00:57:27.703506 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-01 00:57:27.703512 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-01 00:57:27.703518 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-01 00:57:27.703524 | orchestrator |
2026-01-01 00:57:27.703531 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-01 00:57:27.703537 | orchestrator | Thursday 01 January 2026 00:48:37 +0000 (0:00:07.675) 0:03:02.817 ******
2026-01-01 00:57:27.703543 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.703549 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.703555 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.703562 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-01-01 00:57:27.703568 | orchestrator |
2026-01-01 00:57:27.703574 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-01 00:57:27.703595 | orchestrator | Thursday 01 January 2026 00:48:38 +0000 (0:00:01.030) 0:03:03.847 ******
2026-01-01 00:57:27.703603 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-01 00:57:27.703609 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-01 00:57:27.703615 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-01 00:57:27.703621 | orchestrator |
2026-01-01 00:57:27.703627 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-01 00:57:27.703634 | orchestrator | Thursday 01 January 2026 00:48:39 +0000 (0:00:01.293) 0:03:05.141 ******
2026-01-01 00:57:27.703640 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-01 00:57:27.703651 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-01 00:57:27.703657 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-01 00:57:27.703663 | orchestrator |
2026-01-01 00:57:27.703669 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-01 00:57:27.703675 | orchestrator | Thursday 01 January 2026 00:48:41 +0000 (0:00:01.573) 0:03:06.714 ******
2026-01-01 00:57:27.703681 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.703688 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.704446 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.704466 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.704473 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.704481 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.704488 | orchestrator |
2026-01-01 00:57:27.704496 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-01 00:57:27.704503 | orchestrator | Thursday 01 January 2026 00:48:42 +0000 (0:00:01.194) 0:03:07.909 ******
2026-01-01 00:57:27.704510 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.704517 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.704525 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.704532 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.704539 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.704546 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.704553 | orchestrator |
2026-01-01 00:57:27.704561 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-01 00:57:27.704568 | orchestrator | Thursday 01 January 2026 00:48:43 +0000 (0:00:01.349) 0:03:09.259 ******
2026-01-01 00:57:27.704575 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.704598 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.704606 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.704613 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.704620 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.704628 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.704635 | orchestrator |
2026-01-01 00:57:27.704678 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-01 00:57:27.704686 | orchestrator | Thursday 01 January 2026 00:48:44 +0000 (0:00:00.963) 0:03:10.222 ******
2026-01-01 00:57:27.704694 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.704702 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.704709 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.704716 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.704722 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.704728 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.704734 | orchestrator |
2026-01-01 00:57:27.704740 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-01 00:57:27.704746 | orchestrator | Thursday 01 January 2026 00:48:45 +0000 (0:00:01.086) 0:03:11.308 ******
2026-01-01 00:57:27.704752 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.704758 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.704764 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.704770 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.704777 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.704783 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.704789 | orchestrator |
2026-01-01 00:57:27.704795 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-01 00:57:27.704801 | orchestrator | Thursday 01 January 2026 00:48:46 +0000 (0:00:01.024) 0:03:12.011 ******
2026-01-01 00:57:27.704807 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.704814 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.704820 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.704834 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.704840 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.704846 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.704852 | orchestrator |
2026-01-01 00:57:27.704858 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-01 00:57:27.704865 | orchestrator | Thursday 01 January 2026 00:48:47 +0000 (0:00:01.024) 0:03:13.036 ******
2026-01-01 00:57:27.704871 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.704877 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.704883 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.704889 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.704895 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.704901 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.704907 | orchestrator |
2026-01-01 00:57:27.704913 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-01 00:57:27.704920 | orchestrator | Thursday 01 January 2026 00:48:48 +0000 (0:00:00.705) 0:03:13.742 ******
2026-01-01 00:57:27.704926 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.704932 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.704938 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.704944 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.704950 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.704956 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.704962 | orchestrator |
2026-01-01 00:57:27.704969 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-01 00:57:27.704975 | orchestrator | Thursday 01 January 2026 00:48:49 +0000 (0:00:00.978) 0:03:14.720 ******
2026-01-01 00:57:27.704981 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.704987 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.704993 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.704999 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.705005 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.705011 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.705017 | orchestrator |
2026-01-01 00:57:27.705024 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-01 00:57:27.705030 | orchestrator | Thursday 01 January 2026 00:48:52 +0000 (0:00:03.162) 0:03:17.882 ******
2026-01-01 00:57:27.705036 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.705042 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.705048 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.705054 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705060 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705066 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705073 | orchestrator |
2026-01-01 00:57:27.705079 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-01 00:57:27.705085 | orchestrator | Thursday 01 January 2026 00:48:53 +0000 (0:00:01.202) 0:03:19.085 ******
2026-01-01 00:57:27.705091 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.705097 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.705104 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.705110 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705116 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705122 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705128 | orchestrator |
2026-01-01 00:57:27.705148 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-01 00:57:27.705155 | orchestrator | Thursday 01 January 2026 00:48:54 +0000 (0:00:00.772) 0:03:19.857 ******
2026-01-01 00:57:27.705161 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.705167 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.705173 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.705180 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705186 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705199 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705206 | orchestrator |
2026-01-01 00:57:27.705212 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-01 00:57:27.705218 | orchestrator | Thursday 01 January 2026 00:48:55 +0000 (0:00:01.028) 0:03:20.885 ******
2026-01-01 00:57:27.705224 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-01 00:57:27.705231 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-01 00:57:27.705237 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-01 00:57:27.705243 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705273 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705285 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705295 | orchestrator |
2026-01-01 00:57:27.705305 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-01 00:57:27.705315 | orchestrator | Thursday 01 January 2026 00:48:56 +0000 (0:00:00.766) 0:03:21.651 ******
2026-01-01 00:57:27.705322 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-01 00:57:27.705330 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-01 00:57:27.705337 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.705343 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-01 00:57:27.705349 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-01 00:57:27.705356 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.705362 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-01 00:57:27.705368 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-01 00:57:27.705374 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.705380 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705386 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705392 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705398 | orchestrator |
2026-01-01 00:57:27.705404 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-01 00:57:27.705411 | orchestrator | Thursday 01 January 2026 00:48:57 +0000 (0:00:01.685) 0:03:23.337 ******
2026-01-01 00:57:27.705417 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.705423 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.705433 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.705439 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705445 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705451 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705457 | orchestrator |
2026-01-01 00:57:27.705464 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-01 00:57:27.705470 | orchestrator | Thursday 01 January 2026 00:48:58 +0000 (0:00:00.867) 0:03:24.205 ******
2026-01-01 00:57:27.705476 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.705482 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.705488 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.705494 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705500 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705509 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705515 | orchestrator |
2026-01-01 00:57:27.705521 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-01 00:57:27.705528 | orchestrator | Thursday 01 January 2026 00:48:59 +0000 (0:00:01.112) 0:03:25.317 ******
2026-01-01 00:57:27.705534 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.705540 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.705546 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.705552 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705558 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705564 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705570 | orchestrator |
2026-01-01 00:57:27.705576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-01 00:57:27.705598 | orchestrator | Thursday 01 January 2026 00:49:00 +0000 (0:00:00.959) 0:03:26.276 ******
2026-01-01 00:57:27.705605 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.705611 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.705617 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.705623 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705629 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705635 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705641 | orchestrator |
2026-01-01 00:57:27.705647 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-01 00:57:27.705673 | orchestrator | Thursday 01 January 2026 00:49:01 +0000 (0:00:01.057) 0:03:27.334 ******
2026-01-01 00:57:27.705680 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.705687 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.705693 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.705699 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705705 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705711 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705718 | orchestrator |
2026-01-01 00:57:27.705724 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-01 00:57:27.705730 | orchestrator | Thursday 01 January 2026 00:49:02 +0000 (0:00:00.637) 0:03:27.971 ******
2026-01-01 00:57:27.705736 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.705742 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.705752 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705762 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.705773 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705784 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705795 | orchestrator |
2026-01-01 00:57:27.705802 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-01 00:57:27.705808 | orchestrator | Thursday 01 January 2026 00:49:03 +0000 (0:00:00.882) 0:03:28.853 ******
2026-01-01 00:57:27.705814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 00:57:27.705820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-01 00:57:27.705826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-01 00:57:27.705837 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.705843 | orchestrator |
2026-01-01 00:57:27.705849 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-01 00:57:27.705855 | orchestrator | Thursday 01 January 2026 00:49:03 +0000 (0:00:00.316) 0:03:29.170 ******
2026-01-01 00:57:27.705861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 00:57:27.705867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-01 00:57:27.705873 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-01 00:57:27.705880 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.705886 | orchestrator |
2026-01-01 00:57:27.705892 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-01 00:57:27.705898 | orchestrator | Thursday 01 January 2026 00:49:03 +0000 (0:00:00.301) 0:03:29.471 ******
2026-01-01 00:57:27.705904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 00:57:27.705910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-01 00:57:27.705916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-01 00:57:27.705922 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.705928 | orchestrator |
2026-01-01 00:57:27.705934 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-01 00:57:27.705940 | orchestrator | Thursday 01 January 2026 00:49:04 +0000 (0:00:00.360) 0:03:29.832 ******
2026-01-01 00:57:27.705946 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.705952 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.705958 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.705964 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.705970 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.705976 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.705982 | orchestrator |
2026-01-01 00:57:27.705988 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-01 00:57:27.705995 | orchestrator | Thursday 01 January 2026 00:49:04 +0000 (0:00:00.540) 0:03:30.372 ******
2026-01-01 00:57:27.706001 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-01 00:57:27.706007 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-01 00:57:27.706013 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-01-01 00:57:27.706052 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:57:27.706058 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-01 00:57:27.706064 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-01 00:57:27.706070 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:57:27.706076 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-01 00:57:27.706082 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:57:27.706088 | orchestrator |
2026-01-01 00:57:27.706095 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-01 00:57:27.706101 | orchestrator | Thursday 01 January 2026 00:49:06 +0000 (0:00:01.742) 0:03:32.115 ******
2026-01-01 00:57:27.706107 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.706113 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.706119 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.706125 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:57:27.706131 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:27.706141 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:27.706147 | orchestrator |
2026-01-01 00:57:27.706153 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-01 00:57:27.706159 | orchestrator | Thursday 01 January 2026 00:49:09 +0000 (0:00:02.726) 0:03:34.841 ******
2026-01-01 00:57:27.706165 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.706171 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.706178 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.706184 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:57:27.706190 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:27.706196 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:27.706206 | orchestrator |
2026-01-01 00:57:27.706212 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-01 00:57:27.706218 | orchestrator | Thursday 01 January 2026 00:49:10 +0000 (0:00:01.188) 0:03:36.029 ******
2026-01-01 00:57:27.706224 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.706230 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.706236 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.706243 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:57:27.706249 | orchestrator |
2026-01-01 00:57:27.706255 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-01 00:57:27.706281 | orchestrator | Thursday 01 January 2026 00:49:11 +0000 (0:00:01.277) 0:03:37.307 ******
2026-01-01 00:57:27.706289 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:27.706295 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:27.706301 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:27.706307 | orchestrator |
2026-01-01 00:57:27.706314 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-01 00:57:27.706320 | orchestrator | Thursday 01 January 2026 00:49:12 +0000 (0:00:00.414) 0:03:37.722 ******
2026-01-01 00:57:27.706326 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:27.706332 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:57:27.706339 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:27.706345 | orchestrator |
2026-01-01 00:57:27.706351 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-01 00:57:27.706357 | orchestrator | Thursday 01 January 2026 00:49:13 +0000 (0:00:01.151) 0:03:38.874 ******
2026-01-01
00:57:27.706363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-01 00:57:27.706370 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-01 00:57:27.706376 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-01 00:57:27.706382 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.706388 | orchestrator | 2026-01-01 00:57:27.706394 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-01 00:57:27.706401 | orchestrator | Thursday 01 January 2026 00:49:14 +0000 (0:00:01.261) 0:03:40.136 ****** 2026-01-01 00:57:27.706407 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.706413 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.706419 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.706426 | orchestrator | 2026-01-01 00:57:27.706432 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-01 00:57:27.706438 | orchestrator | Thursday 01 January 2026 00:49:14 +0000 (0:00:00.389) 0:03:40.525 ****** 2026-01-01 00:57:27.706444 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.706450 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.706457 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.706463 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.706469 | orchestrator | 2026-01-01 00:57:27.706475 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-01 00:57:27.706481 | orchestrator | Thursday 01 January 2026 00:49:16 +0000 (0:00:01.176) 0:03:41.701 ****** 2026-01-01 00:57:27.706488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 00:57:27.706494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 00:57:27.706502 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 00:57:27.706513 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706524 | orchestrator | 2026-01-01 00:57:27.706531 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-01 00:57:27.706538 | orchestrator | Thursday 01 January 2026 00:49:16 +0000 (0:00:00.443) 0:03:42.144 ****** 2026-01-01 00:57:27.706544 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706550 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.706561 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.706567 | orchestrator | 2026-01-01 00:57:27.706573 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-01 00:57:27.706579 | orchestrator | Thursday 01 January 2026 00:49:16 +0000 (0:00:00.353) 0:03:42.497 ****** 2026-01-01 00:57:27.706630 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706640 | orchestrator | 2026-01-01 00:57:27.706647 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-01 00:57:27.706653 | orchestrator | Thursday 01 January 2026 00:49:17 +0000 (0:00:00.250) 0:03:42.748 ****** 2026-01-01 00:57:27.706659 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706665 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.706671 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.706677 | orchestrator | 2026-01-01 00:57:27.706684 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-01 00:57:27.706690 | orchestrator | Thursday 01 January 2026 00:49:17 +0000 (0:00:00.329) 0:03:43.077 ****** 2026-01-01 00:57:27.706696 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706702 | orchestrator | 2026-01-01 00:57:27.706708 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] 
******************** 2026-01-01 00:57:27.706714 | orchestrator | Thursday 01 January 2026 00:49:17 +0000 (0:00:00.226) 0:03:43.304 ****** 2026-01-01 00:57:27.706720 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706727 | orchestrator | 2026-01-01 00:57:27.706733 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-01 00:57:27.706739 | orchestrator | Thursday 01 January 2026 00:49:17 +0000 (0:00:00.226) 0:03:43.530 ****** 2026-01-01 00:57:27.706748 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706755 | orchestrator | 2026-01-01 00:57:27.706761 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-01 00:57:27.706767 | orchestrator | Thursday 01 January 2026 00:49:18 +0000 (0:00:00.139) 0:03:43.670 ****** 2026-01-01 00:57:27.706773 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706779 | orchestrator | 2026-01-01 00:57:27.706785 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-01 00:57:27.706792 | orchestrator | Thursday 01 January 2026 00:49:18 +0000 (0:00:00.759) 0:03:44.429 ****** 2026-01-01 00:57:27.706798 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706804 | orchestrator | 2026-01-01 00:57:27.706810 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-01 00:57:27.706816 | orchestrator | Thursday 01 January 2026 00:49:19 +0000 (0:00:00.239) 0:03:44.669 ****** 2026-01-01 00:57:27.706822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 00:57:27.706828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 00:57:27.706834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 00:57:27.706840 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706846 | orchestrator | 2026-01-01 00:57:27.706853 
| orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-01 00:57:27.706881 | orchestrator | Thursday 01 January 2026 00:49:19 +0000 (0:00:00.436) 0:03:45.105 ****** 2026-01-01 00:57:27.706888 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706895 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.706901 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.706907 | orchestrator | 2026-01-01 00:57:27.706913 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-01 00:57:27.706920 | orchestrator | Thursday 01 January 2026 00:49:19 +0000 (0:00:00.359) 0:03:45.464 ****** 2026-01-01 00:57:27.706926 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706932 | orchestrator | 2026-01-01 00:57:27.706938 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-01 00:57:27.706945 | orchestrator | Thursday 01 January 2026 00:49:20 +0000 (0:00:00.266) 0:03:45.731 ****** 2026-01-01 00:57:27.706951 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.706962 | orchestrator | 2026-01-01 00:57:27.706968 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-01 00:57:27.706974 | orchestrator | Thursday 01 January 2026 00:49:20 +0000 (0:00:00.278) 0:03:46.010 ****** 2026-01-01 00:57:27.706980 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.706987 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.706993 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.706999 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.707005 | orchestrator | 2026-01-01 00:57:27.707011 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-01 00:57:27.707017 | 
orchestrator | Thursday 01 January 2026 00:49:21 +0000 (0:00:01.115) 0:03:47.125 ****** 2026-01-01 00:57:27.707024 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.707036 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.707049 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.707063 | orchestrator | 2026-01-01 00:57:27.707073 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-01 00:57:27.707085 | orchestrator | Thursday 01 January 2026 00:49:21 +0000 (0:00:00.381) 0:03:47.506 ****** 2026-01-01 00:57:27.707096 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.707107 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.707118 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.707129 | orchestrator | 2026-01-01 00:57:27.707139 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-01 00:57:27.707150 | orchestrator | Thursday 01 January 2026 00:49:23 +0000 (0:00:01.343) 0:03:48.849 ****** 2026-01-01 00:57:27.707162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 00:57:27.707172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 00:57:27.707181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 00:57:27.707192 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.707201 | orchestrator | 2026-01-01 00:57:27.707210 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-01 00:57:27.707216 | orchestrator | Thursday 01 January 2026 00:49:24 +0000 (0:00:01.417) 0:03:50.267 ****** 2026-01-01 00:57:27.707221 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.707227 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.707232 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.707237 | orchestrator | 2026-01-01 00:57:27.707242 | orchestrator | 
RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-01 00:57:27.707248 | orchestrator | Thursday 01 January 2026 00:49:25 +0000 (0:00:00.679) 0:03:50.946 ****** 2026-01-01 00:57:27.707253 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.707258 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.707264 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.707269 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.707275 | orchestrator | 2026-01-01 00:57:27.707280 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-01 00:57:27.707285 | orchestrator | Thursday 01 January 2026 00:49:26 +0000 (0:00:00.905) 0:03:51.852 ****** 2026-01-01 00:57:27.707290 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.707296 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.707301 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.707306 | orchestrator | 2026-01-01 00:57:27.707311 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-01 00:57:27.707317 | orchestrator | Thursday 01 January 2026 00:49:26 +0000 (0:00:00.480) 0:03:52.332 ****** 2026-01-01 00:57:27.707322 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.707328 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.707333 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.707338 | orchestrator | 2026-01-01 00:57:27.707347 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-01 00:57:27.707358 | orchestrator | Thursday 01 January 2026 00:49:27 +0000 (0:00:01.236) 0:03:53.569 ****** 2026-01-01 00:57:27.707363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 00:57:27.707368 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-4)  2026-01-01 00:57:27.707373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 00:57:27.707379 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.707384 | orchestrator | 2026-01-01 00:57:27.707389 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-01 00:57:27.707395 | orchestrator | Thursday 01 January 2026 00:49:28 +0000 (0:00:00.553) 0:03:54.122 ****** 2026-01-01 00:57:27.707400 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.707405 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.707410 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.707416 | orchestrator | 2026-01-01 00:57:27.707421 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-01 00:57:27.707426 | orchestrator | Thursday 01 January 2026 00:49:28 +0000 (0:00:00.281) 0:03:54.404 ****** 2026-01-01 00:57:27.707432 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.707437 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.707442 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.707447 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.707453 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.707481 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.707488 | orchestrator | 2026-01-01 00:57:27.707493 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-01 00:57:27.707498 | orchestrator | Thursday 01 January 2026 00:49:29 +0000 (0:00:00.752) 0:03:55.157 ****** 2026-01-01 00:57:27.707504 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.707509 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.707515 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.707520 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.707525 | orchestrator | 2026-01-01 00:57:27.707531 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-01 00:57:27.707536 | orchestrator | Thursday 01 January 2026 00:49:30 +0000 (0:00:00.726) 0:03:55.883 ****** 2026-01-01 00:57:27.707542 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.707547 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.707552 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.707558 | orchestrator | 2026-01-01 00:57:27.707563 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-01 00:57:27.707568 | orchestrator | Thursday 01 January 2026 00:49:30 +0000 (0:00:00.499) 0:03:56.383 ****** 2026-01-01 00:57:27.707574 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.707579 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.707602 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.707611 | orchestrator | 2026-01-01 00:57:27.707621 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-01 00:57:27.707630 | orchestrator | Thursday 01 January 2026 00:49:32 +0000 (0:00:01.446) 0:03:57.829 ****** 2026-01-01 00:57:27.707640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-01 00:57:27.707650 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-01 00:57:27.707655 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-01 00:57:27.707661 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.707666 | orchestrator | 2026-01-01 00:57:27.707671 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-01 00:57:27.707677 | orchestrator | Thursday 01 January 2026 00:49:33 +0000 (0:00:00.926) 0:03:58.756 ****** 2026-01-01 00:57:27.707682 | orchestrator 
| ok: [testbed-node-0] 2026-01-01 00:57:27.707687 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.707692 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.707702 | orchestrator | 2026-01-01 00:57:27.707707 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-01 00:57:27.707713 | orchestrator | 2026-01-01 00:57:27.707720 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-01 00:57:27.707730 | orchestrator | Thursday 01 January 2026 00:49:33 +0000 (0:00:00.593) 0:03:59.350 ****** 2026-01-01 00:57:27.707739 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.707748 | orchestrator | 2026-01-01 00:57:27.707757 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-01 00:57:27.707766 | orchestrator | Thursday 01 January 2026 00:49:34 +0000 (0:00:00.884) 0:04:00.234 ****** 2026-01-01 00:57:27.707775 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.707783 | orchestrator | 2026-01-01 00:57:27.707791 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-01 00:57:27.707799 | orchestrator | Thursday 01 January 2026 00:49:35 +0000 (0:00:00.560) 0:04:00.794 ****** 2026-01-01 00:57:27.707807 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.707816 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.707824 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.707832 | orchestrator | 2026-01-01 00:57:27.707841 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-01 00:57:27.707851 | orchestrator | Thursday 01 January 2026 00:49:36 +0000 (0:00:01.125) 0:04:01.920 ****** 
2026-01-01 00:57:27.707860 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.707870 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.707880 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.707885 | orchestrator | 2026-01-01 00:57:27.707891 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-01 00:57:27.707896 | orchestrator | Thursday 01 January 2026 00:49:36 +0000 (0:00:00.341) 0:04:02.262 ****** 2026-01-01 00:57:27.707901 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.707907 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.707912 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.707917 | orchestrator | 2026-01-01 00:57:27.707926 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-01 00:57:27.707932 | orchestrator | Thursday 01 January 2026 00:49:36 +0000 (0:00:00.325) 0:04:02.587 ****** 2026-01-01 00:57:27.707937 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.707942 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.707947 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.707953 | orchestrator | 2026-01-01 00:57:27.707958 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-01 00:57:27.707963 | orchestrator | Thursday 01 January 2026 00:49:37 +0000 (0:00:00.375) 0:04:02.962 ****** 2026-01-01 00:57:27.707969 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.707974 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.707979 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.707985 | orchestrator | 2026-01-01 00:57:27.707990 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-01 00:57:27.707995 | orchestrator | Thursday 01 January 2026 00:49:38 +0000 (0:00:01.182) 0:04:04.144 ****** 2026-01-01 
00:57:27.708000 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.708006 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.708011 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.708016 | orchestrator | 2026-01-01 00:57:27.708022 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-01 00:57:27.708027 | orchestrator | Thursday 01 January 2026 00:49:39 +0000 (0:00:00.550) 0:04:04.695 ****** 2026-01-01 00:57:27.708058 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.708064 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.708069 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.708079 | orchestrator | 2026-01-01 00:57:27.708085 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-01 00:57:27.708090 | orchestrator | Thursday 01 January 2026 00:49:39 +0000 (0:00:00.358) 0:04:05.054 ****** 2026-01-01 00:57:27.708096 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.708101 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.708107 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.708112 | orchestrator | 2026-01-01 00:57:27.708117 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-01 00:57:27.708123 | orchestrator | Thursday 01 January 2026 00:49:40 +0000 (0:00:00.999) 0:04:06.053 ****** 2026-01-01 00:57:27.708128 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.708133 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.708139 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.708144 | orchestrator | 2026-01-01 00:57:27.708149 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-01 00:57:27.708155 | orchestrator | Thursday 01 January 2026 00:49:41 +0000 (0:00:00.984) 0:04:07.037 ****** 2026-01-01 00:57:27.708160 | orchestrator | 
skipping: [testbed-node-0] 2026-01-01 00:57:27.708165 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.708171 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.708176 | orchestrator | 2026-01-01 00:57:27.708181 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-01 00:57:27.708187 | orchestrator | Thursday 01 January 2026 00:49:41 +0000 (0:00:00.277) 0:04:07.314 ****** 2026-01-01 00:57:27.708192 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.708197 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.708203 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.708208 | orchestrator | 2026-01-01 00:57:27.708213 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-01 00:57:27.708219 | orchestrator | Thursday 01 January 2026 00:49:42 +0000 (0:00:00.359) 0:04:07.674 ****** 2026-01-01 00:57:27.708224 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.708230 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.708235 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.708240 | orchestrator | 2026-01-01 00:57:27.708246 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-01 00:57:27.708251 | orchestrator | Thursday 01 January 2026 00:49:42 +0000 (0:00:00.351) 0:04:08.026 ****** 2026-01-01 00:57:27.708256 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.708261 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.708267 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.708272 | orchestrator | 2026-01-01 00:57:27.708277 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-01 00:57:27.708283 | orchestrator | Thursday 01 January 2026 00:49:42 +0000 (0:00:00.302) 0:04:08.328 ****** 2026-01-01 00:57:27.708288 | orchestrator | skipping: 
[testbed-node-0] 2026-01-01 00:57:27.708293 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.708299 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.708304 | orchestrator | 2026-01-01 00:57:27.708309 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-01 00:57:27.708315 | orchestrator | Thursday 01 January 2026 00:49:43 +0000 (0:00:00.649) 0:04:08.978 ****** 2026-01-01 00:57:27.708320 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.708325 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.708331 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.708336 | orchestrator | 2026-01-01 00:57:27.708341 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-01 00:57:27.708346 | orchestrator | Thursday 01 January 2026 00:49:43 +0000 (0:00:00.305) 0:04:09.283 ****** 2026-01-01 00:57:27.708352 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.708357 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.708362 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.708368 | orchestrator | 2026-01-01 00:57:27.708373 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-01 00:57:27.708382 | orchestrator | Thursday 01 January 2026 00:49:44 +0000 (0:00:00.351) 0:04:09.635 ****** 2026-01-01 00:57:27.708387 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.708392 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.708398 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.708403 | orchestrator | 2026-01-01 00:57:27.708408 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-01 00:57:27.708414 | orchestrator | Thursday 01 January 2026 00:49:44 +0000 (0:00:00.317) 0:04:09.953 ****** 2026-01-01 00:57:27.708419 | orchestrator | ok: [testbed-node-0] 2026-01-01 
00:57:27.708424 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.708430 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.708435 | orchestrator | 2026-01-01 00:57:27.708440 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-01 00:57:27.708451 | orchestrator | Thursday 01 January 2026 00:49:44 +0000 (0:00:00.576) 0:04:10.530 ****** 2026-01-01 00:57:27.708456 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.708462 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.708467 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.708472 | orchestrator | 2026-01-01 00:57:27.708477 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-01 00:57:27.708483 | orchestrator | Thursday 01 January 2026 00:49:45 +0000 (0:00:00.725) 0:04:11.256 ****** 2026-01-01 00:57:27.708488 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.708494 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.708499 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.708504 | orchestrator | 2026-01-01 00:57:27.708509 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-01 00:57:27.708515 | orchestrator | Thursday 01 January 2026 00:49:45 +0000 (0:00:00.331) 0:04:11.587 ****** 2026-01-01 00:57:27.708520 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.708526 | orchestrator | 2026-01-01 00:57:27.708531 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-01 00:57:27.708536 | orchestrator | Thursday 01 January 2026 00:49:46 +0000 (0:00:00.958) 0:04:12.545 ****** 2026-01-01 00:57:27.708542 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.708547 | orchestrator | 2026-01-01 00:57:27.708567 | orchestrator | TASK [ceph-mon : Generate 
monitor initial keyring] ***************************** 2026-01-01 00:57:27.708574 | orchestrator | Thursday 01 January 2026 00:49:47 +0000 (0:00:00.241) 0:04:12.786 ****** 2026-01-01 00:57:27.708579 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-01 00:57:27.708599 | orchestrator | 2026-01-01 00:57:27.708605 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-01-01 00:57:27.708610 | orchestrator | Thursday 01 January 2026 00:49:48 +0000 (0:00:01.069) 0:04:13.856 ****** 2026-01-01 00:57:27.708616 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.708621 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.708626 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.708632 | orchestrator | 2026-01-01 00:57:27.708637 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-01 00:57:27.708642 | orchestrator | Thursday 01 January 2026 00:49:48 +0000 (0:00:00.362) 0:04:14.218 ****** 2026-01-01 00:57:27.708648 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.708653 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.708658 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.708664 | orchestrator | 2026-01-01 00:57:27.708669 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-01 00:57:27.708674 | orchestrator | Thursday 01 January 2026 00:49:48 +0000 (0:00:00.372) 0:04:14.591 ****** 2026-01-01 00:57:27.708680 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.708685 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.708690 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.708696 | orchestrator | 2026-01-01 00:57:27.708701 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-01 00:57:27.708710 | orchestrator | Thursday 01 January 2026 00:49:50 +0000 (0:00:01.524) 0:04:16.116 ****** 
2026-01-01 00:57:27.708716 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.708721 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.708726 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.708732 | orchestrator | 2026-01-01 00:57:27.708737 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-01-01 00:57:27.708743 | orchestrator | Thursday 01 January 2026 00:49:51 +0000 (0:00:00.829) 0:04:16.945 ****** 2026-01-01 00:57:27.708748 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.708753 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.708758 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.708764 | orchestrator | 2026-01-01 00:57:27.708769 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-01 00:57:27.708774 | orchestrator | Thursday 01 January 2026 00:49:52 +0000 (0:00:00.912) 0:04:17.858 ****** 2026-01-01 00:57:27.708780 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.708785 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.708790 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.708795 | orchestrator | 2026-01-01 00:57:27.708801 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-01 00:57:27.708806 | orchestrator | Thursday 01 January 2026 00:49:53 +0000 (0:00:00.940) 0:04:18.798 ****** 2026-01-01 00:57:27.708811 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.708817 | orchestrator | 2026-01-01 00:57:27.708822 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-01 00:57:27.708827 | orchestrator | Thursday 01 January 2026 00:49:55 +0000 (0:00:02.369) 0:04:21.168 ****** 2026-01-01 00:57:27.708832 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.708838 | orchestrator | 2026-01-01 00:57:27.708843 | orchestrator | TASK [ceph-mon : 
Copy admin keyring over to mons] ****************************** 2026-01-01 00:57:27.708848 | orchestrator | Thursday 01 January 2026 00:49:56 +0000 (0:00:00.912) 0:04:22.080 ****** 2026-01-01 00:57:27.708854 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-01 00:57:27.708859 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.708864 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.708870 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-01 00:57:27.708875 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-01-01 00:57:27.708880 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-01 00:57:27.708886 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 00:57:27.708891 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-01-01 00:57:27.708896 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-01 00:57:27.708902 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-01 00:57:27.708907 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 00:57:27.708912 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-01-01 00:57:27.708918 | orchestrator | 2026-01-01 00:57:27.708926 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-01 00:57:27.708931 | orchestrator | Thursday 01 January 2026 00:50:00 +0000 (0:00:04.502) 0:04:26.583 ****** 2026-01-01 00:57:27.708936 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.708942 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.708947 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.708952 | orchestrator | 2026-01-01 00:57:27.708957 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-01-01 00:57:27.708963 | orchestrator | Thursday 01 January 2026 00:50:02 +0000 (0:00:01.477) 0:04:28.061 ****** 2026-01-01 00:57:27.708968 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.708973 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.708982 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.708987 | orchestrator | 2026-01-01 00:57:27.708992 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-01 00:57:27.708998 | orchestrator | Thursday 01 January 2026 00:50:02 +0000 (0:00:00.477) 0:04:28.538 ****** 2026-01-01 00:57:27.709003 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.709008 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.709014 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.709019 | orchestrator | 2026-01-01 00:57:27.709024 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-01 00:57:27.709030 | orchestrator | Thursday 01 January 2026 00:50:03 +0000 (0:00:00.720) 0:04:29.259 ****** 2026-01-01 00:57:27.709035 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.709056 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.709063 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.709068 | orchestrator | 2026-01-01 00:57:27.709075 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-01 00:57:27.709084 | orchestrator | Thursday 01 January 2026 00:50:05 +0000 (0:00:02.246) 0:04:31.506 ****** 2026-01-01 00:57:27.709094 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.709103 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.709113 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.709122 | orchestrator | 2026-01-01 00:57:27.709131 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-01 
00:57:27.709140 | orchestrator | Thursday 01 January 2026 00:50:07 +0000 (0:00:01.479) 0:04:32.986 ****** 2026-01-01 00:57:27.709148 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.709158 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.709167 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.709176 | orchestrator | 2026-01-01 00:57:27.709186 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-01 00:57:27.709195 | orchestrator | Thursday 01 January 2026 00:50:07 +0000 (0:00:00.602) 0:04:33.588 ****** 2026-01-01 00:57:27.709205 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.709210 | orchestrator | 2026-01-01 00:57:27.709216 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-01 00:57:27.709221 | orchestrator | Thursday 01 January 2026 00:50:08 +0000 (0:00:00.847) 0:04:34.436 ****** 2026-01-01 00:57:27.709226 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.709231 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.709237 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.709242 | orchestrator | 2026-01-01 00:57:27.709247 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-01 00:57:27.709252 | orchestrator | Thursday 01 January 2026 00:50:09 +0000 (0:00:00.483) 0:04:34.919 ****** 2026-01-01 00:57:27.709258 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.709263 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.709268 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.709274 | orchestrator | 2026-01-01 00:57:27.709279 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-01 00:57:27.709284 | orchestrator | Thursday 01 January 2026 
00:50:09 +0000 (0:00:00.374) 0:04:35.294 ****** 2026-01-01 00:57:27.709289 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.709295 | orchestrator | 2026-01-01 00:57:27.709300 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-01-01 00:57:27.709305 | orchestrator | Thursday 01 January 2026 00:50:10 +0000 (0:00:00.860) 0:04:36.155 ****** 2026-01-01 00:57:27.709311 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.709316 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.709321 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.709326 | orchestrator | 2026-01-01 00:57:27.709332 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-01 00:57:27.709343 | orchestrator | Thursday 01 January 2026 00:50:13 +0000 (0:00:02.850) 0:04:39.006 ****** 2026-01-01 00:57:27.709348 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.709353 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.709359 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.709364 | orchestrator | 2026-01-01 00:57:27.709369 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-01-01 00:57:27.709374 | orchestrator | Thursday 01 January 2026 00:50:14 +0000 (0:00:01.338) 0:04:40.344 ****** 2026-01-01 00:57:27.709380 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.709385 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.709390 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.709395 | orchestrator | 2026-01-01 00:57:27.709401 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-01 00:57:27.709406 | orchestrator | Thursday 01 January 2026 00:50:18 +0000 (0:00:03.415) 0:04:43.759 ****** 2026-01-01 00:57:27.709411 | 
orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.709417 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.709422 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.709427 | orchestrator | 2026-01-01 00:57:27.709433 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-01-01 00:57:27.709438 | orchestrator | Thursday 01 January 2026 00:50:21 +0000 (0:00:03.459) 0:04:47.219 ****** 2026-01-01 00:57:27.709443 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.709448 | orchestrator | 2026-01-01 00:57:27.709457 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-01 00:57:27.709462 | orchestrator | Thursday 01 January 2026 00:50:22 +0000 (0:00:01.020) 0:04:48.240 ****** 2026-01-01 00:57:27.709468 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-01-01 00:57:27.709473 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.709478 | orchestrator | 2026-01-01 00:57:27.709484 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-01 00:57:27.709489 | orchestrator | Thursday 01 January 2026 00:50:44 +0000 (0:00:22.243) 0:05:10.483 ****** 2026-01-01 00:57:27.709494 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.709499 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.709505 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.709510 | orchestrator | 2026-01-01 00:57:27.709515 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-01 00:57:27.709521 | orchestrator | Thursday 01 January 2026 00:50:55 +0000 (0:00:10.185) 0:05:20.668 ****** 2026-01-01 00:57:27.709526 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.709531 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.709536 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.709541 | orchestrator | 2026-01-01 00:57:27.709547 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-01 00:57:27.709572 | orchestrator | Thursday 01 January 2026 00:50:55 +0000 (0:00:00.814) 0:05:21.483 ****** 2026-01-01 00:57:27.709579 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77218c4cb3b7cd151dc826394d5c9ec1f31f6d4a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-01 00:57:27.709603 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77218c4cb3b7cd151dc826394d5c9ec1f31f6d4a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-01 00:57:27.709612 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77218c4cb3b7cd151dc826394d5c9ec1f31f6d4a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-01 00:57:27.709628 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77218c4cb3b7cd151dc826394d5c9ec1f31f6d4a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-01 00:57:27.709637 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77218c4cb3b7cd151dc826394d5c9ec1f31f6d4a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-01 00:57:27.709646 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77218c4cb3b7cd151dc826394d5c9ec1f31f6d4a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__77218c4cb3b7cd151dc826394d5c9ec1f31f6d4a'}])  2026-01-01 00:57:27.709653 | orchestrator | 2026-01-01 00:57:27.709658 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-01-01 00:57:27.709664 | orchestrator | Thursday 01 January 2026 00:51:11 +0000 (0:00:15.497) 0:05:36.981 ****** 2026-01-01 00:57:27.709669 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.709674 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.709680 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.709685 | orchestrator | 2026-01-01 00:57:27.709690 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-01 00:57:27.709696 | orchestrator | Thursday 01 January 2026 00:51:11 +0000 (0:00:00.345) 0:05:37.326 ****** 2026-01-01 00:57:27.709701 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.709706 | orchestrator | 2026-01-01 00:57:27.709711 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-01 00:57:27.709717 | orchestrator | Thursday 01 January 2026 00:51:12 +0000 (0:00:00.873) 0:05:38.200 ****** 2026-01-01 00:57:27.709722 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.709731 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.709737 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.709742 | orchestrator | 2026-01-01 00:57:27.709747 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-01 00:57:27.709753 | orchestrator | Thursday 01 January 2026 00:51:13 +0000 (0:00:00.437) 0:05:38.638 ****** 2026-01-01 00:57:27.709758 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.709763 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.709769 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.709774 | orchestrator | 2026-01-01 00:57:27.709779 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-01 
00:57:27.709785 | orchestrator | Thursday 01 January 2026 00:51:13 +0000 (0:00:00.415) 0:05:39.054 ****** 2026-01-01 00:57:27.709790 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-01 00:57:27.709795 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-01 00:57:27.709800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-01 00:57:27.709806 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.709811 | orchestrator | 2026-01-01 00:57:27.709816 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-01 00:57:27.709825 | orchestrator | Thursday 01 January 2026 00:51:14 +0000 (0:00:01.013) 0:05:40.068 ****** 2026-01-01 00:57:27.709830 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.709839 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.709874 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.709885 | orchestrator | 2026-01-01 00:57:27.709895 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-01-01 00:57:27.709905 | orchestrator | 2026-01-01 00:57:27.709910 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-01 00:57:27.709915 | orchestrator | Thursday 01 January 2026 00:51:15 +0000 (0:00:01.010) 0:05:41.078 ****** 2026-01-01 00:57:27.709921 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.709926 | orchestrator | 2026-01-01 00:57:27.709932 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-01 00:57:27.709937 | orchestrator | Thursday 01 January 2026 00:51:16 +0000 (0:00:00.666) 0:05:41.745 ****** 2026-01-01 00:57:27.709942 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-01 00:57:27.709947 | orchestrator | 2026-01-01 00:57:27.709953 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-01 00:57:27.709958 | orchestrator | Thursday 01 January 2026 00:51:17 +0000 (0:00:00.874) 0:05:42.620 ****** 2026-01-01 00:57:27.709963 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.709969 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.709974 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.709979 | orchestrator | 2026-01-01 00:57:27.709984 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-01 00:57:27.709990 | orchestrator | Thursday 01 January 2026 00:51:17 +0000 (0:00:00.857) 0:05:43.477 ****** 2026-01-01 00:57:27.709995 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710000 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710005 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.710011 | orchestrator | 2026-01-01 00:57:27.710036 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-01 00:57:27.710042 | orchestrator | Thursday 01 January 2026 00:51:18 +0000 (0:00:00.331) 0:05:43.808 ****** 2026-01-01 00:57:27.710047 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710053 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710058 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.710063 | orchestrator | 2026-01-01 00:57:27.710068 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-01 00:57:27.710074 | orchestrator | Thursday 01 January 2026 00:51:18 +0000 (0:00:00.594) 0:05:44.402 ****** 2026-01-01 00:57:27.710079 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710085 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710090 | orchestrator | skipping: 
[testbed-node-2] 2026-01-01 00:57:27.710095 | orchestrator | 2026-01-01 00:57:27.710100 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-01 00:57:27.710106 | orchestrator | Thursday 01 January 2026 00:51:19 +0000 (0:00:00.347) 0:05:44.750 ****** 2026-01-01 00:57:27.710111 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.710116 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.710122 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.710127 | orchestrator | 2026-01-01 00:57:27.710132 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-01 00:57:27.710138 | orchestrator | Thursday 01 January 2026 00:51:19 +0000 (0:00:00.754) 0:05:45.504 ****** 2026-01-01 00:57:27.710143 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710148 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710154 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.710159 | orchestrator | 2026-01-01 00:57:27.710164 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-01 00:57:27.710170 | orchestrator | Thursday 01 January 2026 00:51:20 +0000 (0:00:00.359) 0:05:45.863 ****** 2026-01-01 00:57:27.710180 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710185 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710191 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.710196 | orchestrator | 2026-01-01 00:57:27.710201 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-01 00:57:27.710206 | orchestrator | Thursday 01 January 2026 00:51:20 +0000 (0:00:00.640) 0:05:46.503 ****** 2026-01-01 00:57:27.710212 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.710217 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.710222 | orchestrator | ok: [testbed-node-2] 2026-01-01 
00:57:27.710228 | orchestrator | 2026-01-01 00:57:27.710233 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-01 00:57:27.710238 | orchestrator | Thursday 01 January 2026 00:51:21 +0000 (0:00:00.792) 0:05:47.296 ****** 2026-01-01 00:57:27.710244 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.710249 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.710254 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.710260 | orchestrator | 2026-01-01 00:57:27.710265 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-01 00:57:27.710270 | orchestrator | Thursday 01 January 2026 00:51:22 +0000 (0:00:00.690) 0:05:47.987 ****** 2026-01-01 00:57:27.710276 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710281 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710286 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.710292 | orchestrator | 2026-01-01 00:57:27.710297 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-01 00:57:27.710302 | orchestrator | Thursday 01 January 2026 00:51:22 +0000 (0:00:00.319) 0:05:48.307 ****** 2026-01-01 00:57:27.710308 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.710313 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.710318 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.710324 | orchestrator | 2026-01-01 00:57:27.710329 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-01 00:57:27.710334 | orchestrator | Thursday 01 January 2026 00:51:23 +0000 (0:00:00.611) 0:05:48.918 ****** 2026-01-01 00:57:27.710340 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710345 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710350 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.710356 | orchestrator | 
2026-01-01 00:57:27.710361 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-01 00:57:27.710385 | orchestrator | Thursday 01 January 2026 00:51:23 +0000 (0:00:00.339) 0:05:49.258 ****** 2026-01-01 00:57:27.710391 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710397 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710402 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.710407 | orchestrator | 2026-01-01 00:57:27.710413 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-01 00:57:27.710418 | orchestrator | Thursday 01 January 2026 00:51:23 +0000 (0:00:00.343) 0:05:49.601 ****** 2026-01-01 00:57:27.710423 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710429 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710434 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.710439 | orchestrator | 2026-01-01 00:57:27.710445 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-01 00:57:27.710450 | orchestrator | Thursday 01 January 2026 00:51:24 +0000 (0:00:00.301) 0:05:49.903 ****** 2026-01-01 00:57:27.710455 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710461 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710466 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.710471 | orchestrator | 2026-01-01 00:57:27.710477 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-01 00:57:27.710482 | orchestrator | Thursday 01 January 2026 00:51:24 +0000 (0:00:00.330) 0:05:50.233 ****** 2026-01-01 00:57:27.710491 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710496 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710502 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.710507 | orchestrator | 
2026-01-01 00:57:27.710512 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-01 00:57:27.710518 | orchestrator | Thursday 01 January 2026 00:51:25 +0000 (0:00:00.712) 0:05:50.946 ****** 2026-01-01 00:57:27.710523 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.710528 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.710534 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.710539 | orchestrator | 2026-01-01 00:57:27.710544 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-01 00:57:27.710550 | orchestrator | Thursday 01 January 2026 00:51:25 +0000 (0:00:00.403) 0:05:51.350 ****** 2026-01-01 00:57:27.710555 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.710560 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.710566 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.710571 | orchestrator | 2026-01-01 00:57:27.710576 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-01 00:57:27.710615 | orchestrator | Thursday 01 January 2026 00:51:26 +0000 (0:00:00.399) 0:05:51.749 ****** 2026-01-01 00:57:27.710622 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.710628 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.710633 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.710639 | orchestrator | 2026-01-01 00:57:27.710644 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-01 00:57:27.710649 | orchestrator | Thursday 01 January 2026 00:51:26 +0000 (0:00:00.851) 0:05:52.601 ****** 2026-01-01 00:57:27.710655 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-01 00:57:27.710660 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 00:57:27.710666 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-01-01 00:57:27.710671 | orchestrator | 2026-01-01 00:57:27.710676 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-01 00:57:27.710682 | orchestrator | Thursday 01 January 2026 00:51:28 +0000 (0:00:01.068) 0:05:53.669 ****** 2026-01-01 00:57:27.710687 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.710693 | orchestrator | 2026-01-01 00:57:27.710698 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-01 00:57:27.710704 | orchestrator | Thursday 01 January 2026 00:51:28 +0000 (0:00:00.642) 0:05:54.311 ****** 2026-01-01 00:57:27.710709 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.710714 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.710720 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.710725 | orchestrator | 2026-01-01 00:57:27.710730 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-01 00:57:27.710736 | orchestrator | Thursday 01 January 2026 00:51:29 +0000 (0:00:00.813) 0:05:55.125 ****** 2026-01-01 00:57:27.710741 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.710796 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.710809 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.710814 | orchestrator | 2026-01-01 00:57:27.710820 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-01 00:57:27.710825 | orchestrator | Thursday 01 January 2026 00:51:30 +0000 (0:00:00.728) 0:05:55.853 ****** 2026-01-01 00:57:27.710833 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-01 00:57:27.710839 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-01 00:57:27.710844 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-01-01 00:57:27.710850 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-01 00:57:27.710855 | orchestrator | 2026-01-01 00:57:27.710860 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-01 00:57:27.710870 | orchestrator | Thursday 01 January 2026 00:51:40 +0000 (0:00:10.163) 0:06:06.017 ****** 2026-01-01 00:57:27.710875 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.710881 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.710886 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.710892 | orchestrator | 2026-01-01 00:57:27.710897 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-01 00:57:27.710902 | orchestrator | Thursday 01 January 2026 00:51:40 +0000 (0:00:00.461) 0:06:06.478 ****** 2026-01-01 00:57:27.710908 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-01 00:57:27.710913 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-01 00:57:27.710918 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-01 00:57:27.710924 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.710929 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-01 00:57:27.710957 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.710964 | orchestrator | 2026-01-01 00:57:27.710969 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-01 00:57:27.710974 | orchestrator | Thursday 01 January 2026 00:51:43 +0000 (0:00:02.298) 0:06:08.777 ****** 2026-01-01 00:57:27.710980 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-01 00:57:27.710985 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-01 00:57:27.710990 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-01 
00:57:27.710996 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-01 00:57:27.711001 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-01 00:57:27.711007 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-01 00:57:27.711012 | orchestrator | 2026-01-01 00:57:27.711017 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-01 00:57:27.711023 | orchestrator | Thursday 01 January 2026 00:51:44 +0000 (0:00:01.262) 0:06:10.040 ****** 2026-01-01 00:57:27.711028 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.711038 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.711046 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.711055 | orchestrator | 2026-01-01 00:57:27.711064 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-01 00:57:27.711073 | orchestrator | Thursday 01 January 2026 00:51:45 +0000 (0:00:01.138) 0:06:11.178 ****** 2026-01-01 00:57:27.711082 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.711089 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.711094 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.711098 | orchestrator | 2026-01-01 00:57:27.711103 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-01 00:57:27.711108 | orchestrator | Thursday 01 January 2026 00:51:45 +0000 (0:00:00.362) 0:06:11.541 ****** 2026-01-01 00:57:27.711112 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.711117 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.711122 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.711126 | orchestrator | 2026-01-01 00:57:27.711131 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-01 00:57:27.711136 | orchestrator | Thursday 01 January 2026 00:51:46 +0000 (0:00:00.349) 
0:06:11.891 ****** 2026-01-01 00:57:27.711141 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.711145 | orchestrator | 2026-01-01 00:57:27.711150 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-01 00:57:27.711155 | orchestrator | Thursday 01 January 2026 00:51:47 +0000 (0:00:00.873) 0:06:12.764 ****** 2026-01-01 00:57:27.711159 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.711164 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.711169 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.711173 | orchestrator | 2026-01-01 00:57:27.711178 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-01 00:57:27.711187 | orchestrator | Thursday 01 January 2026 00:51:47 +0000 (0:00:00.493) 0:06:13.257 ****** 2026-01-01 00:57:27.711191 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.711196 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.711201 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.711205 | orchestrator | 2026-01-01 00:57:27.711210 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-01-01 00:57:27.711215 | orchestrator | Thursday 01 January 2026 00:51:48 +0000 (0:00:00.447) 0:06:13.705 ****** 2026-01-01 00:57:27.711220 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.711225 | orchestrator | 2026-01-01 00:57:27.711229 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-01 00:57:27.711234 | orchestrator | Thursday 01 January 2026 00:51:48 +0000 (0:00:00.864) 0:06:14.570 ****** 2026-01-01 00:57:27.711239 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.711244 | orchestrator | 
changed: [testbed-node-1] 2026-01-01 00:57:27.711248 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.711253 | orchestrator | 2026-01-01 00:57:27.711258 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-01 00:57:27.711262 | orchestrator | Thursday 01 January 2026 00:51:50 +0000 (0:00:01.287) 0:06:15.857 ****** 2026-01-01 00:57:27.711267 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.711272 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.711277 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.711281 | orchestrator | 2026-01-01 00:57:27.711286 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-01 00:57:27.711294 | orchestrator | Thursday 01 January 2026 00:51:51 +0000 (0:00:01.121) 0:06:16.978 ****** 2026-01-01 00:57:27.711299 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.711303 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.711308 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.711313 | orchestrator | 2026-01-01 00:57:27.711317 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-01-01 00:57:27.711322 | orchestrator | Thursday 01 January 2026 00:51:53 +0000 (0:00:01.898) 0:06:18.877 ****** 2026-01-01 00:57:27.711327 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.711332 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.711336 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.711341 | orchestrator | 2026-01-01 00:57:27.711346 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-01 00:57:27.711350 | orchestrator | Thursday 01 January 2026 00:51:55 +0000 (0:00:02.369) 0:06:21.246 ****** 2026-01-01 00:57:27.711355 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.711360 | orchestrator | skipping: 
[testbed-node-1] 2026-01-01 00:57:27.711364 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-01 00:57:27.711369 | orchestrator | 2026-01-01 00:57:27.711375 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-01 00:57:27.711383 | orchestrator | Thursday 01 January 2026 00:51:56 +0000 (0:00:00.499) 0:06:21.746 ****** 2026-01-01 00:57:27.711416 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-01 00:57:27.711426 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-01 00:57:27.711434 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-01 00:57:27.711442 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-01 00:57:27.711450 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-01-01 00:57:27.711458 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-01-01 00:57:27.711471 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-01 00:57:27.711480 | orchestrator | 2026-01-01 00:57:27.711488 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-01-01 00:57:27.711497 | orchestrator | Thursday 01 January 2026 00:52:32 +0000 (0:00:36.161) 0:06:57.907 ****** 2026-01-01 00:57:27.711505 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-01 00:57:27.711513 | orchestrator | 2026-01-01 00:57:27.711518 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-01-01 00:57:27.711523 | orchestrator | Thursday 01 January 2026 00:52:33 +0000 (0:00:01.312) 0:06:59.220 ****** 2026-01-01 00:57:27.711527 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.711532 | orchestrator | 2026-01-01 00:57:27.711537 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-01-01 00:57:27.711542 | orchestrator | Thursday 01 January 2026 00:52:33 +0000 (0:00:00.342) 0:06:59.563 ****** 2026-01-01 00:57:27.711546 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.711551 | orchestrator | 2026-01-01 00:57:27.711556 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-01-01 00:57:27.711560 | orchestrator | Thursday 01 January 2026 00:52:34 +0000 (0:00:00.191) 0:06:59.754 ****** 2026-01-01 00:57:27.711565 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-01-01 00:57:27.711570 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-01-01 00:57:27.711575 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-01-01 00:57:27.711579 | orchestrator | 2026-01-01 00:57:27.711599 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-01-01 00:57:27.711604 | orchestrator | Thursday 01 January 2026 00:52:40 +0000 (0:00:06.854) 0:07:06.609 ****** 2026-01-01 00:57:27.711609 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-01 00:57:27.711614 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-01 00:57:27.711619 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-01 00:57:27.711624 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-01 00:57:27.711628 | orchestrator | 2026-01-01 00:57:27.711633 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-01 00:57:27.711638 | orchestrator | Thursday 01 January 2026 00:52:46 +0000 (0:00:05.538) 0:07:12.147 ****** 2026-01-01 00:57:27.711643 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.711648 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.711652 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.711657 | orchestrator | 2026-01-01 00:57:27.711662 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-01 00:57:27.711667 | orchestrator | Thursday 01 January 2026 00:52:47 +0000 (0:00:00.799) 0:07:12.947 ****** 2026-01-01 00:57:27.711672 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.711677 | orchestrator | 2026-01-01 00:57:27.711682 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-01 00:57:27.711686 | orchestrator | Thursday 01 January 2026 00:52:48 +0000 (0:00:00.838) 0:07:13.785 ****** 2026-01-01 00:57:27.711691 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.711696 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.711701 | orchestrator | ok: 
[testbed-node-2] 2026-01-01 00:57:27.711706 | orchestrator | 2026-01-01 00:57:27.711710 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-01 00:57:27.711718 | orchestrator | Thursday 01 January 2026 00:52:48 +0000 (0:00:00.364) 0:07:14.150 ****** 2026-01-01 00:57:27.711723 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:57:27.711728 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:57:27.711733 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:57:27.711741 | orchestrator | 2026-01-01 00:57:27.711749 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-01 00:57:27.711758 | orchestrator | Thursday 01 January 2026 00:52:49 +0000 (0:00:01.279) 0:07:15.430 ****** 2026-01-01 00:57:27.711766 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-01 00:57:27.711775 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-01 00:57:27.711783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-01 00:57:27.711791 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.711800 | orchestrator | 2026-01-01 00:57:27.711808 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-01 00:57:27.711815 | orchestrator | Thursday 01 January 2026 00:52:50 +0000 (0:00:00.618) 0:07:16.048 ****** 2026-01-01 00:57:27.711820 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.711825 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.711830 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.711834 | orchestrator | 2026-01-01 00:57:27.711839 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-01 00:57:27.711844 | orchestrator | 2026-01-01 00:57:27.711848 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-01 
00:57:27.711873 | orchestrator | Thursday 01 January 2026 00:52:51 +0000 (0:00:00.919) 0:07:16.968 ****** 2026-01-01 00:57:27.711879 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-01-01 00:57:27.711884 | orchestrator | 2026-01-01 00:57:27.711888 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-01 00:57:27.711893 | orchestrator | Thursday 01 January 2026 00:52:51 +0000 (0:00:00.561) 0:07:17.530 ****** 2026-01-01 00:57:27.711898 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.711903 | orchestrator | 2026-01-01 00:57:27.711907 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-01 00:57:27.711912 | orchestrator | Thursday 01 January 2026 00:52:52 +0000 (0:00:00.860) 0:07:18.390 ****** 2026-01-01 00:57:27.711917 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.711922 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.711926 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.711931 | orchestrator | 2026-01-01 00:57:27.711936 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-01 00:57:27.711940 | orchestrator | Thursday 01 January 2026 00:52:53 +0000 (0:00:00.328) 0:07:18.719 ****** 2026-01-01 00:57:27.711945 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.711950 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.711955 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.711960 | orchestrator | 2026-01-01 00:57:27.711964 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-01 00:57:27.711969 | orchestrator | Thursday 01 January 2026 00:52:53 +0000 (0:00:00.713) 0:07:19.433 ****** 
2026-01-01 00:57:27.711974 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.711979 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.711983 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.711988 | orchestrator | 2026-01-01 00:57:27.711993 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-01 00:57:27.711998 | orchestrator | Thursday 01 January 2026 00:52:54 +0000 (0:00:00.773) 0:07:20.206 ****** 2026-01-01 00:57:27.712002 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.712007 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.712012 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.712017 | orchestrator | 2026-01-01 00:57:27.712021 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-01 00:57:27.712026 | orchestrator | Thursday 01 January 2026 00:52:56 +0000 (0:00:01.493) 0:07:21.700 ****** 2026-01-01 00:57:27.712031 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.712036 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.712045 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.712049 | orchestrator | 2026-01-01 00:57:27.712054 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-01 00:57:27.712059 | orchestrator | Thursday 01 January 2026 00:52:56 +0000 (0:00:00.386) 0:07:22.086 ****** 2026-01-01 00:57:27.712064 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.712068 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.712073 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.712078 | orchestrator | 2026-01-01 00:57:27.712083 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-01 00:57:27.712087 | orchestrator | Thursday 01 January 2026 00:52:56 +0000 (0:00:00.318) 0:07:22.404 ****** 2026-01-01 00:57:27.712092 | 
orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.712097 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.712102 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.712106 | orchestrator | 2026-01-01 00:57:27.712111 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-01 00:57:27.712116 | orchestrator | Thursday 01 January 2026 00:52:57 +0000 (0:00:00.268) 0:07:22.673 ****** 2026-01-01 00:57:27.712121 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.712125 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.712130 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.712135 | orchestrator | 2026-01-01 00:57:27.712140 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-01 00:57:27.712144 | orchestrator | Thursday 01 January 2026 00:52:57 +0000 (0:00:00.838) 0:07:23.512 ****** 2026-01-01 00:57:27.712149 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.712154 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.712159 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.712163 | orchestrator | 2026-01-01 00:57:27.712168 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-01 00:57:27.712173 | orchestrator | Thursday 01 January 2026 00:52:58 +0000 (0:00:00.744) 0:07:24.256 ****** 2026-01-01 00:57:27.712178 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.712183 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.712190 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.712195 | orchestrator | 2026-01-01 00:57:27.712200 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-01 00:57:27.712205 | orchestrator | Thursday 01 January 2026 00:52:58 +0000 (0:00:00.295) 0:07:24.552 ****** 2026-01-01 00:57:27.712209 | orchestrator | skipping: 
[testbed-node-3] 2026-01-01 00:57:27.712214 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.712219 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.712224 | orchestrator | 2026-01-01 00:57:27.712229 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-01 00:57:27.712233 | orchestrator | Thursday 01 January 2026 00:52:59 +0000 (0:00:00.264) 0:07:24.816 ****** 2026-01-01 00:57:27.712238 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.712243 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.712248 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.712252 | orchestrator | 2026-01-01 00:57:27.712257 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-01 00:57:27.712262 | orchestrator | Thursday 01 January 2026 00:52:59 +0000 (0:00:00.482) 0:07:25.299 ****** 2026-01-01 00:57:27.712267 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.712271 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.712276 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.712281 | orchestrator | 2026-01-01 00:57:27.712286 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-01 00:57:27.712292 | orchestrator | Thursday 01 January 2026 00:52:59 +0000 (0:00:00.288) 0:07:25.587 ****** 2026-01-01 00:57:27.712297 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.712302 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.712307 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.712312 | orchestrator | 2026-01-01 00:57:27.712319 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-01 00:57:27.712324 | orchestrator | Thursday 01 January 2026 00:53:00 +0000 (0:00:00.295) 0:07:25.883 ****** 2026-01-01 00:57:27.712329 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.712334 | 
orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.712338 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.712343 | orchestrator | 2026-01-01 00:57:27.712348 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-01 00:57:27.712353 | orchestrator | Thursday 01 January 2026 00:53:00 +0000 (0:00:00.308) 0:07:26.191 ****** 2026-01-01 00:57:27.712357 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.712362 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.712367 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.712372 | orchestrator | 2026-01-01 00:57:27.712376 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-01 00:57:27.712381 | orchestrator | Thursday 01 January 2026 00:53:01 +0000 (0:00:00.460) 0:07:26.652 ****** 2026-01-01 00:57:27.712386 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.712391 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.712395 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.712400 | orchestrator | 2026-01-01 00:57:27.712405 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-01 00:57:27.712410 | orchestrator | Thursday 01 January 2026 00:53:01 +0000 (0:00:00.281) 0:07:26.933 ****** 2026-01-01 00:57:27.712415 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.712419 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.712424 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.712429 | orchestrator | 2026-01-01 00:57:27.712434 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-01 00:57:27.712438 | orchestrator | Thursday 01 January 2026 00:53:01 +0000 (0:00:00.308) 0:07:27.241 ****** 2026-01-01 00:57:27.712443 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.712448 | orchestrator | ok: 
[testbed-node-4] 2026-01-01 00:57:27.712453 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.712458 | orchestrator | 2026-01-01 00:57:27.712462 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-01-01 00:57:27.712467 | orchestrator | Thursday 01 January 2026 00:53:02 +0000 (0:00:00.703) 0:07:27.945 ****** 2026-01-01 00:57:27.712472 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.712477 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.712481 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.712486 | orchestrator | 2026-01-01 00:57:27.712491 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-01-01 00:57:27.712496 | orchestrator | Thursday 01 January 2026 00:53:02 +0000 (0:00:00.312) 0:07:28.257 ****** 2026-01-01 00:57:27.712501 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 00:57:27.712505 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 00:57:27.712510 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 00:57:27.712515 | orchestrator | 2026-01-01 00:57:27.712520 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-01-01 00:57:27.712524 | orchestrator | Thursday 01 January 2026 00:53:03 +0000 (0:00:00.554) 0:07:28.812 ****** 2026-01-01 00:57:27.712529 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-01-01 00:57:27.712534 | orchestrator | 2026-01-01 00:57:27.712539 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-01-01 00:57:27.712543 | orchestrator | Thursday 01 January 2026 00:53:03 +0000 (0:00:00.451) 0:07:29.263 ****** 2026-01-01 00:57:27.712548 | orchestrator | skipping: 
[testbed-node-3] 2026-01-01 00:57:27.712553 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.712558 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.712566 | orchestrator | 2026-01-01 00:57:27.712571 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-01-01 00:57:27.712575 | orchestrator | Thursday 01 January 2026 00:53:04 +0000 (0:00:00.507) 0:07:29.770 ****** 2026-01-01 00:57:27.712591 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.712597 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.712602 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.712607 | orchestrator | 2026-01-01 00:57:27.712612 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-01-01 00:57:27.712619 | orchestrator | Thursday 01 January 2026 00:53:04 +0000 (0:00:00.365) 0:07:30.136 ****** 2026-01-01 00:57:27.712624 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.712628 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.712633 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.712638 | orchestrator | 2026-01-01 00:57:27.712642 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-01 00:57:27.712647 | orchestrator | Thursday 01 January 2026 00:53:05 +0000 (0:00:00.619) 0:07:30.756 ****** 2026-01-01 00:57:27.712652 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.712657 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.712661 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.712666 | orchestrator | 2026-01-01 00:57:27.712671 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-01 00:57:27.712675 | orchestrator | Thursday 01 January 2026 00:53:05 +0000 (0:00:00.367) 0:07:31.123 ****** 2026-01-01 00:57:27.712680 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-01 00:57:27.712685 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-01 00:57:27.712690 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-01 00:57:27.712700 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-01 00:57:27.712705 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-01 00:57:27.712714 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-01 00:57:27.712722 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-01 00:57:27.712730 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-01 00:57:27.712737 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-01 00:57:27.712745 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-01 00:57:27.712752 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-01 00:57:27.712759 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-01 00:57:27.712767 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-01 00:57:27.712775 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-01 00:57:27.712784 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-01 00:57:27.712793 | orchestrator | 2026-01-01 00:57:27.712800 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-01-01 00:57:27.712807 | orchestrator | Thursday 01 January 2026 00:53:08 +0000 (0:00:03.105) 0:07:34.229 ****** 2026-01-01 00:57:27.712812 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.712817 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.712822 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.712826 | orchestrator | 2026-01-01 00:57:27.712831 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-01-01 00:57:27.712836 | orchestrator | Thursday 01 January 2026 00:53:08 +0000 (0:00:00.330) 0:07:34.559 ****** 2026-01-01 00:57:27.712845 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.712849 | orchestrator | 2026-01-01 00:57:27.712854 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-01-01 00:57:27.712859 | orchestrator | Thursday 01 January 2026 00:53:09 +0000 (0:00:00.567) 0:07:35.127 ****** 2026-01-01 00:57:27.712863 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-01 00:57:27.712868 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-01 00:57:27.712873 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-01-01 00:57:27.712878 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-01 00:57:27.712882 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-01-01 00:57:27.712887 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-01-01 00:57:27.712892 | orchestrator | 2026-01-01 00:57:27.712896 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-01-01 00:57:27.712901 | orchestrator | Thursday 01 January 2026 00:53:10 +0000 (0:00:01.426) 0:07:36.553 ****** 2026-01-01 00:57:27.712906 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.712910 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-01 00:57:27.712915 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 00:57:27.712920 | orchestrator | 2026-01-01 00:57:27.712925 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-01-01 00:57:27.712929 | orchestrator | Thursday 01 January 2026 00:53:13 +0000 (0:00:02.495) 0:07:39.048 ****** 2026-01-01 00:57:27.712934 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-01 00:57:27.712939 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-01 00:57:27.712944 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.712948 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-01 00:57:27.712953 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-01 00:57:27.712958 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.712962 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-01 00:57:27.712967 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-01 00:57:27.712972 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.712976 | orchestrator | 2026-01-01 00:57:27.712986 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-01-01 00:57:27.712991 | orchestrator | Thursday 01 January 2026 00:53:14 +0000 (0:00:01.460) 0:07:40.509 ****** 2026-01-01 00:57:27.712996 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-01 00:57:27.713001 | orchestrator | 2026-01-01 00:57:27.713005 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-01-01 00:57:27.713010 | orchestrator | Thursday 01 January 2026 00:53:17 +0000 (0:00:02.593) 0:07:43.102 ****** 2026-01-01 00:57:27.713015 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.713019 | orchestrator | 2026-01-01 00:57:27.713024 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-01-01 00:57:27.713029 | orchestrator | Thursday 01 January 2026 00:53:18 +0000 (0:00:00.649) 0:07:43.752 ****** 2026-01-01 00:57:27.713034 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-906f607d-f8ab-576d-9485-c345cfde3c80', 'data_vg': 'ceph-906f607d-f8ab-576d-9485-c345cfde3c80'}) 2026-01-01 00:57:27.713039 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-21a5f53a-dc04-53e0-afe9-de267ba79db4', 'data_vg': 'ceph-21a5f53a-dc04-53e0-afe9-de267ba79db4'}) 2026-01-01 00:57:27.713047 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4f4651f5-78d1-505d-b741-249c77d228e7', 'data_vg': 'ceph-4f4651f5-78d1-505d-b741-249c77d228e7'}) 2026-01-01 00:57:27.713052 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-27db58f4-0fe4-54a7-94bd-e6fe47c26f99', 'data_vg': 'ceph-27db58f4-0fe4-54a7-94bd-e6fe47c26f99'}) 2026-01-01 00:57:27.713060 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b87804f1-5161-5843-851c-861f025ab6ce', 'data_vg': 'ceph-b87804f1-5161-5843-851c-861f025ab6ce'}) 2026-01-01 00:57:27.713065 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e5dc050d-fe50-5167-b35b-32fd51d3d555', 'data_vg': 'ceph-e5dc050d-fe50-5167-b35b-32fd51d3d555'}) 2026-01-01 00:57:27.713070 | orchestrator | 2026-01-01 00:57:27.713075 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-01-01 00:57:27.713079 | orchestrator | Thursday 01 January 2026 00:53:57 +0000 (0:00:39.567) 0:08:23.319 ****** 2026-01-01 00:57:27.713084 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713089 | orchestrator | skipping: [testbed-node-4] 2026-01-01 
00:57:27.713093 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.713098 | orchestrator | 2026-01-01 00:57:27.713103 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-01-01 00:57:27.713108 | orchestrator | Thursday 01 January 2026 00:53:58 +0000 (0:00:00.430) 0:08:23.750 ****** 2026-01-01 00:57:27.713112 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.713117 | orchestrator | 2026-01-01 00:57:27.713122 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-01-01 00:57:27.713126 | orchestrator | Thursday 01 January 2026 00:53:59 +0000 (0:00:00.937) 0:08:24.687 ****** 2026-01-01 00:57:27.713131 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.713136 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.713141 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.713145 | orchestrator | 2026-01-01 00:57:27.713150 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-01-01 00:57:27.713155 | orchestrator | Thursday 01 January 2026 00:53:59 +0000 (0:00:00.716) 0:08:25.404 ****** 2026-01-01 00:57:27.713159 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.713164 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.713169 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.713173 | orchestrator | 2026-01-01 00:57:27.713178 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-01-01 00:57:27.713183 | orchestrator | Thursday 01 January 2026 00:54:02 +0000 (0:00:02.686) 0:08:28.091 ****** 2026-01-01 00:57:27.713187 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.713192 | orchestrator | 2026-01-01 00:57:27.713197 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-01-01 00:57:27.713202 | orchestrator | Thursday 01 January 2026 00:54:03 +0000 (0:00:00.705) 0:08:28.796 ****** 2026-01-01 00:57:27.713206 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.713211 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.713216 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.713220 | orchestrator | 2026-01-01 00:57:27.713225 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-01-01 00:57:27.713230 | orchestrator | Thursday 01 January 2026 00:54:04 +0000 (0:00:01.194) 0:08:29.991 ****** 2026-01-01 00:57:27.713234 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.713239 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.713244 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.713248 | orchestrator | 2026-01-01 00:57:27.713253 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-01-01 00:57:27.713258 | orchestrator | Thursday 01 January 2026 00:54:05 +0000 (0:00:01.243) 0:08:31.235 ****** 2026-01-01 00:57:27.713262 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.713267 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.713272 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.713276 | orchestrator | 2026-01-01 00:57:27.713281 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-01-01 00:57:27.713289 | orchestrator | Thursday 01 January 2026 00:54:07 +0000 (0:00:01.819) 0:08:33.054 ****** 2026-01-01 00:57:27.713294 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713298 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.713303 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.713308 | orchestrator | 2026-01-01 00:57:27.713315 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-01-01 00:57:27.713319 | orchestrator | Thursday 01 January 2026 00:54:08 +0000 (0:00:00.621) 0:08:33.675 ****** 2026-01-01 00:57:27.713324 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713329 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.713333 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.713338 | orchestrator | 2026-01-01 00:57:27.713343 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-01 00:57:27.713347 | orchestrator | Thursday 01 January 2026 00:54:08 +0000 (0:00:00.354) 0:08:34.030 ****** 2026-01-01 00:57:27.713352 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-01 00:57:27.713357 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-01-01 00:57:27.713361 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-01-01 00:57:27.713366 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-01-01 00:57:27.713371 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-01-01 00:57:27.713375 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-01-01 00:57:27.713380 | orchestrator | 2026-01-01 00:57:27.713385 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-01 00:57:27.713389 | orchestrator | Thursday 01 January 2026 00:54:09 +0000 (0:00:01.101) 0:08:35.131 ****** 2026-01-01 00:57:27.713394 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-01 00:57:27.713399 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-01 00:57:27.713405 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-01 00:57:27.713411 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-01-01 00:57:27.713415 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-01-01 00:57:27.713420 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-01 00:57:27.713424 | orchestrator | 2026-01-01 00:57:27.713429 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-01-01 00:57:27.713434 | orchestrator | Thursday 01 January 2026 00:54:11 +0000 (0:00:02.287) 0:08:37.419 ****** 2026-01-01 00:57:27.713439 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-01 00:57:27.713443 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-01 00:57:27.713448 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-01 00:57:27.713453 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-01-01 00:57:27.713457 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-01-01 00:57:27.713462 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-01 00:57:27.713467 | orchestrator | 2026-01-01 00:57:27.713471 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-01 00:57:27.713476 | orchestrator | Thursday 01 January 2026 00:54:15 +0000 (0:00:03.982) 0:08:41.402 ****** 2026-01-01 00:57:27.713481 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713486 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.713490 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-01 00:57:27.713495 | orchestrator | 2026-01-01 00:57:27.713500 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-01 00:57:27.713504 | orchestrator | Thursday 01 January 2026 00:54:18 +0000 (0:00:02.383) 0:08:43.786 ****** 2026-01-01 00:57:27.713509 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713514 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.713519 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-01-01 00:57:27.713523 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-01 00:57:27.713528 | orchestrator | 2026-01-01 00:57:27.713533 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-01 00:57:27.713543 | orchestrator | Thursday 01 January 2026 00:54:30 +0000 (0:00:12.573) 0:08:56.359 ****** 2026-01-01 00:57:27.713547 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713552 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.713557 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.713561 | orchestrator | 2026-01-01 00:57:27.713566 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-01 00:57:27.713571 | orchestrator | Thursday 01 January 2026 00:54:31 +0000 (0:00:01.221) 0:08:57.580 ****** 2026-01-01 00:57:27.713575 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713590 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.713600 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.713608 | orchestrator | 2026-01-01 00:57:27.713617 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-01 00:57:27.713625 | orchestrator | Thursday 01 January 2026 00:54:32 +0000 (0:00:00.404) 0:08:57.985 ****** 2026-01-01 00:57:27.713633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.713640 | orchestrator | 2026-01-01 00:57:27.713649 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-01 00:57:27.713657 | orchestrator | Thursday 01 January 2026 00:54:33 +0000 (0:00:00.669) 0:08:58.654 ****** 2026-01-01 00:57:27.713665 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 00:57:27.713672 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-01-01 00:57:27.713676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 00:57:27.713681 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713688 | orchestrator | 2026-01-01 00:57:27.713697 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-01 00:57:27.713705 | orchestrator | Thursday 01 January 2026 00:54:34 +0000 (0:00:01.054) 0:08:59.709 ****** 2026-01-01 00:57:27.713711 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713716 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.713721 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.713725 | orchestrator | 2026-01-01 00:57:27.713730 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-01 00:57:27.713735 | orchestrator | Thursday 01 January 2026 00:54:34 +0000 (0:00:00.464) 0:09:00.174 ****** 2026-01-01 00:57:27.713740 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713744 | orchestrator | 2026-01-01 00:57:27.713749 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-01 00:57:27.713757 | orchestrator | Thursday 01 January 2026 00:54:34 +0000 (0:00:00.259) 0:09:00.434 ****** 2026-01-01 00:57:27.713761 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713766 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.713771 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.713776 | orchestrator | 2026-01-01 00:57:27.713780 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-01 00:57:27.713785 | orchestrator | Thursday 01 January 2026 00:54:35 +0000 (0:00:00.327) 0:09:00.761 ****** 2026-01-01 00:57:27.713790 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713795 | orchestrator | 2026-01-01 00:57:27.713799 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-01 00:57:27.713804 | orchestrator | Thursday 01 January 2026 00:54:35 +0000 (0:00:00.243) 0:09:01.005 ****** 2026-01-01 00:57:27.713809 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713817 | orchestrator | 2026-01-01 00:57:27.713822 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-01 00:57:27.713827 | orchestrator | Thursday 01 January 2026 00:54:35 +0000 (0:00:00.258) 0:09:01.264 ****** 2026-01-01 00:57:27.713831 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713836 | orchestrator | 2026-01-01 00:57:27.713841 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-01 00:57:27.713850 | orchestrator | Thursday 01 January 2026 00:54:35 +0000 (0:00:00.120) 0:09:01.385 ****** 2026-01-01 00:57:27.713858 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713863 | orchestrator | 2026-01-01 00:57:27.713868 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-01 00:57:27.713873 | orchestrator | Thursday 01 January 2026 00:54:35 +0000 (0:00:00.204) 0:09:01.589 ****** 2026-01-01 00:57:27.713877 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713882 | orchestrator | 2026-01-01 00:57:27.713887 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-01 00:57:27.713892 | orchestrator | Thursday 01 January 2026 00:54:36 +0000 (0:00:00.865) 0:09:02.454 ****** 2026-01-01 00:57:27.713896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 00:57:27.713901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 00:57:27.713906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 00:57:27.713910 | orchestrator | skipping: [testbed-node-3] 2026-01-01 
00:57:27.713915 | orchestrator | 2026-01-01 00:57:27.713920 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-01 00:57:27.713924 | orchestrator | Thursday 01 January 2026 00:54:37 +0000 (0:00:00.502) 0:09:02.957 ****** 2026-01-01 00:57:27.713929 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713934 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.713938 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.713943 | orchestrator | 2026-01-01 00:57:27.713948 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-01 00:57:27.713952 | orchestrator | Thursday 01 January 2026 00:54:37 +0000 (0:00:00.381) 0:09:03.338 ****** 2026-01-01 00:57:27.713957 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713962 | orchestrator | 2026-01-01 00:57:27.713966 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-01 00:57:27.713971 | orchestrator | Thursday 01 January 2026 00:54:37 +0000 (0:00:00.272) 0:09:03.611 ****** 2026-01-01 00:57:27.713976 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.713981 | orchestrator | 2026-01-01 00:57:27.713985 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-01-01 00:57:27.713990 | orchestrator | 2026-01-01 00:57:27.713995 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-01 00:57:27.713999 | orchestrator | Thursday 01 January 2026 00:54:38 +0000 (0:00:00.941) 0:09:04.553 ****** 2026-01-01 00:57:27.714004 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.714009 | orchestrator | 2026-01-01 00:57:27.714041 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-01-01 00:57:27.714047 | orchestrator | Thursday 01 January 2026 00:54:40 +0000 (0:00:01.265) 0:09:05.818 ****** 2026-01-01 00:57:27.714052 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:57:27.714057 | orchestrator | 2026-01-01 00:57:27.714062 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-01 00:57:27.714067 | orchestrator | Thursday 01 January 2026 00:54:41 +0000 (0:00:01.146) 0:09:06.964 ****** 2026-01-01 00:57:27.714071 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.714076 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.714081 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.714086 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.714090 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.714095 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.714100 | orchestrator | 2026-01-01 00:57:27.714105 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-01 00:57:27.714109 | orchestrator | Thursday 01 January 2026 00:54:42 +0000 (0:00:01.495) 0:09:08.460 ****** 2026-01-01 00:57:27.714118 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.714123 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.714127 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.714132 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.714137 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.714142 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.714146 | orchestrator | 2026-01-01 00:57:27.714151 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-01 00:57:27.714156 | orchestrator | Thursday 01 
January 2026 00:54:43 +0000 (0:00:00.767) 0:09:09.227 ****** 2026-01-01 00:57:27.714161 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.714165 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.714170 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.714175 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.714182 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.714187 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.714192 | orchestrator | 2026-01-01 00:57:27.714196 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-01 00:57:27.714201 | orchestrator | Thursday 01 January 2026 00:54:44 +0000 (0:00:01.091) 0:09:10.319 ****** 2026-01-01 00:57:27.714206 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.714211 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.714215 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.714220 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.714225 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.714230 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.714234 | orchestrator | 2026-01-01 00:57:27.714239 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-01 00:57:27.714244 | orchestrator | Thursday 01 January 2026 00:54:45 +0000 (0:00:00.802) 0:09:11.122 ****** 2026-01-01 00:57:27.714249 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.714253 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.714258 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.714263 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.714268 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.714272 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.714277 | orchestrator | 2026-01-01 00:57:27.714282 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-01-01 00:57:27.714289 | orchestrator | Thursday 01 January 2026 00:54:46 +0000 (0:00:01.405) 0:09:12.527 ****** 2026-01-01 00:57:27.714294 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.714299 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.714304 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.714309 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.714313 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.714318 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.714323 | orchestrator | 2026-01-01 00:57:27.714327 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-01 00:57:27.714332 | orchestrator | Thursday 01 January 2026 00:54:47 +0000 (0:00:00.668) 0:09:13.195 ****** 2026-01-01 00:57:27.714337 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.714342 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.714346 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.714351 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.714356 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.714360 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.714365 | orchestrator | 2026-01-01 00:57:27.714370 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-01 00:57:27.714375 | orchestrator | Thursday 01 January 2026 00:54:48 +0000 (0:00:01.054) 0:09:14.249 ****** 2026-01-01 00:57:27.714379 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.714384 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.714389 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.714394 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.714401 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.714406 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.714411 | orchestrator 
| 2026-01-01 00:57:27.714416 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-01 00:57:27.714420 | orchestrator | Thursday 01 January 2026 00:54:49 +0000 (0:00:01.038) 0:09:15.288 ****** 2026-01-01 00:57:27.714425 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.714430 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.714434 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.714439 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.714444 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:57:27.714448 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.714453 | orchestrator | 2026-01-01 00:57:27.714458 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-01 00:57:27.714463 | orchestrator | Thursday 01 January 2026 00:54:51 +0000 (0:00:01.481) 0:09:16.769 ****** 2026-01-01 00:57:27.714468 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.714472 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.714477 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.714482 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.714486 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.714491 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.714496 | orchestrator | 2026-01-01 00:57:27.714501 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-01 00:57:27.714505 | orchestrator | Thursday 01 January 2026 00:54:51 +0000 (0:00:00.726) 0:09:17.495 ****** 2026-01-01 00:57:27.714510 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.714515 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.714520 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.714524 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:57:27.714529 | orchestrator | ok: [testbed-node-1] 2026-01-01 
00:57:27.714534 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:57:27.714538 | orchestrator | 2026-01-01 00:57:27.714543 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-01 00:57:27.714548 | orchestrator | Thursday 01 January 2026 00:54:52 +0000 (0:00:01.099) 0:09:18.594 ****** 2026-01-01 00:57:27.714553 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.714557 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.714562 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.714567 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.714571 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.714576 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.714609 | orchestrator | 2026-01-01 00:57:27.714615 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-01 00:57:27.714620 | orchestrator | Thursday 01 January 2026 00:54:53 +0000 (0:00:00.730) 0:09:19.325 ****** 2026-01-01 00:57:27.714624 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.714629 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.714634 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.714639 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.714643 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.714648 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.714653 | orchestrator | 2026-01-01 00:57:27.714657 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-01 00:57:27.714662 | orchestrator | Thursday 01 January 2026 00:54:54 +0000 (0:00:01.010) 0:09:20.336 ****** 2026-01-01 00:57:27.714667 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.714672 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.714676 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.714684 | orchestrator | skipping: [testbed-node-0] 
2026-01-01 00:57:27.714689 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.714693 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.714698 | orchestrator | 2026-01-01 00:57:27.714703 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-01 00:57:27.714711 | orchestrator | Thursday 01 January 2026 00:54:55 +0000 (0:00:00.660) 0:09:20.996 ****** 2026-01-01 00:57:27.714715 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.714720 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.714725 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.714729 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.714734 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.714739 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.714743 | orchestrator | 2026-01-01 00:57:27.714748 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-01 00:57:27.714753 | orchestrator | Thursday 01 January 2026 00:54:56 +0000 (0:00:00.966) 0:09:21.962 ****** 2026-01-01 00:57:27.714758 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.714762 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.714767 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.714772 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:57:27.714776 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:57:27.714781 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:57:27.714786 | orchestrator | 2026-01-01 00:57:27.714791 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-01 00:57:27.714798 | orchestrator | Thursday 01 January 2026 00:54:57 +0000 (0:00:00.685) 0:09:22.647 ****** 2026-01-01 00:57:27.714803 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.714808 | orchestrator | skipping: [testbed-node-4] 
2026-01-01 00:57:27.714813 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.714818 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:27.714822 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:27.714827 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:27.714832 | orchestrator |
2026-01-01 00:57:27.714837 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-01 00:57:27.714842 | orchestrator | Thursday 01 January 2026 00:54:57 +0000 (0:00:00.901) 0:09:23.549 ******
2026-01-01 00:57:27.714846 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.714851 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.714856 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.714860 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:27.714865 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:27.714870 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:27.714874 | orchestrator |
2026-01-01 00:57:27.714879 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-01 00:57:27.714884 | orchestrator | Thursday 01 January 2026 00:54:58 +0000 (0:00:00.681) 0:09:24.231 ******
2026-01-01 00:57:27.714889 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.714893 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.714898 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.714903 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:27.714907 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:27.714912 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:27.714917 | orchestrator |
2026-01-01 00:57:27.714922 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-01 00:57:27.714926 | orchestrator | Thursday 01 January 2026 00:55:00 +0000 (0:00:01.406) 0:09:25.638 ******
2026-01-01 00:57:27.714931 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-01 00:57:27.714936 | orchestrator |
2026-01-01 00:57:27.714941 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-01 00:57:27.714945 | orchestrator | Thursday 01 January 2026 00:55:04 +0000 (0:00:04.178) 0:09:29.816 ******
2026-01-01 00:57:27.714950 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-01 00:57:27.714955 | orchestrator |
2026-01-01 00:57:27.714960 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-01 00:57:27.714964 | orchestrator | Thursday 01 January 2026 00:55:06 +0000 (0:00:02.129) 0:09:31.946 ******
2026-01-01 00:57:27.714969 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.714974 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.714981 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.714986 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:27.714991 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:27.714996 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:27.715000 | orchestrator |
2026-01-01 00:57:27.715005 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-01 00:57:27.715010 | orchestrator | Thursday 01 January 2026 00:55:08 +0000 (0:00:01.907) 0:09:33.853 ******
2026-01-01 00:57:27.715015 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.715019 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.715024 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.715029 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:57:27.715037 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:27.715045 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:27.715053 | orchestrator |
2026-01-01 00:57:27.715062 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-01 00:57:27.715071 | orchestrator | Thursday 01 January 2026 00:55:09 +0000 (0:00:01.152) 0:09:35.005 ******
2026-01-01 00:57:27.715077 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:57:27.715083 | orchestrator |
2026-01-01 00:57:27.715088 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-01 00:57:27.715092 | orchestrator | Thursday 01 January 2026 00:55:10 +0000 (0:00:01.399) 0:09:36.404 ******
2026-01-01 00:57:27.715097 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.715102 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.715107 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.715111 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:57:27.715116 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:27.715121 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:27.715125 | orchestrator |
2026-01-01 00:57:27.715130 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-01 00:57:27.715135 | orchestrator | Thursday 01 January 2026 00:55:12 +0000 (0:00:01.815) 0:09:38.219 ******
2026-01-01 00:57:27.715140 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.715155 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.715160 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.715165 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:57:27.715169 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:27.715174 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:27.715179 | orchestrator |
2026-01-01 00:57:27.715183 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-01 00:57:27.715188 | orchestrator | Thursday 01 January 2026 00:55:16 +0000 (0:00:03.453) 0:09:41.673 ******
2026-01-01 00:57:27.715192 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:57:27.715197 | orchestrator |
2026-01-01 00:57:27.715201 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-01 00:57:27.715206 | orchestrator | Thursday 01 January 2026 00:55:17 +0000 (0:00:01.401) 0:09:43.075 ******
2026-01-01 00:57:27.715210 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715215 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715219 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715224 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:27.715228 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:27.715233 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:27.715237 | orchestrator |
2026-01-01 00:57:27.715242 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-01 00:57:27.715249 | orchestrator | Thursday 01 January 2026 00:55:18 +0000 (0:00:00.962) 0:09:44.038 ******
2026-01-01 00:57:27.715254 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.715258 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.715266 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.715271 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:57:27.715275 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:57:27.715280 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:57:27.715284 | orchestrator |
2026-01-01 00:57:27.715289 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-01 00:57:27.715293 | orchestrator | Thursday 01 January 2026 00:55:20 +0000 (0:00:02.333) 0:09:46.371 ******
2026-01-01 00:57:27.715298 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715302 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715306 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715311 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:57:27.715315 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:57:27.715320 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:57:27.715324 | orchestrator |
2026-01-01 00:57:27.715329 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-01 00:57:27.715333 | orchestrator |
2026-01-01 00:57:27.715338 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-01 00:57:27.715342 | orchestrator | Thursday 01 January 2026 00:55:22 +0000 (0:00:01.250) 0:09:47.622 ******
2026-01-01 00:57:27.715347 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:57:27.715351 | orchestrator |
2026-01-01 00:57:27.715356 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-01 00:57:27.715360 | orchestrator | Thursday 01 January 2026 00:55:22 +0000 (0:00:00.512) 0:09:48.135 ******
2026-01-01 00:57:27.715365 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:57:27.715369 | orchestrator |
2026-01-01 00:57:27.715374 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-01 00:57:27.715378 | orchestrator | Thursday 01 January 2026 00:55:23 +0000 (0:00:00.827) 0:09:48.962 ******
2026-01-01 00:57:27.715383 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.715388 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.715392 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.715396 | orchestrator |
2026-01-01 00:57:27.715401 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-01 00:57:27.715405 | orchestrator | Thursday 01 January 2026 00:55:23 +0000 (0:00:00.332) 0:09:49.295 ******
2026-01-01 00:57:27.715410 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715414 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715419 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715423 | orchestrator |
2026-01-01 00:57:27.715428 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-01 00:57:27.715432 | orchestrator | Thursday 01 January 2026 00:55:24 +0000 (0:00:00.741) 0:09:50.036 ******
2026-01-01 00:57:27.715437 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715441 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715446 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715450 | orchestrator |
2026-01-01 00:57:27.715455 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-01 00:57:27.715459 | orchestrator | Thursday 01 January 2026 00:55:25 +0000 (0:00:01.100) 0:09:51.136 ******
2026-01-01 00:57:27.715464 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715468 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715472 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715477 | orchestrator |
2026-01-01 00:57:27.715481 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-01 00:57:27.715486 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:00.736) 0:09:51.873 ******
2026-01-01 00:57:27.715490 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.715495 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.715499 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.715504 | orchestrator |
2026-01-01 00:57:27.715508 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-01 00:57:27.715517 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:00.346) 0:09:52.219 ******
2026-01-01 00:57:27.715522 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.715526 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.715531 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.715535 | orchestrator |
2026-01-01 00:57:27.715539 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-01 00:57:27.715544 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:00.348) 0:09:52.568 ******
2026-01-01 00:57:27.715548 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.715555 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.715560 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.715564 | orchestrator |
2026-01-01 00:57:27.715569 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-01 00:57:27.715573 | orchestrator | Thursday 01 January 2026 00:55:27 +0000 (0:00:00.643) 0:09:53.211 ******
2026-01-01 00:57:27.715578 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715593 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715598 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715602 | orchestrator |
2026-01-01 00:57:27.715607 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-01 00:57:27.715611 | orchestrator | Thursday 01 January 2026 00:55:28 +0000 (0:00:00.803) 0:09:54.015 ******
2026-01-01 00:57:27.715615 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715620 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715624 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715629 | orchestrator |
2026-01-01 00:57:27.715633 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-01 00:57:27.715638 | orchestrator | Thursday 01 January 2026 00:55:29 +0000 (0:00:00.783) 0:09:54.798 ******
2026-01-01 00:57:27.715642 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.715647 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.715651 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.715656 | orchestrator |
2026-01-01 00:57:27.715660 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-01 00:57:27.715667 | orchestrator | Thursday 01 January 2026 00:55:29 +0000 (0:00:00.425) 0:09:55.224 ******
2026-01-01 00:57:27.715672 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.715676 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.715681 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.715685 | orchestrator |
2026-01-01 00:57:27.715690 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-01 00:57:27.715694 | orchestrator | Thursday 01 January 2026 00:55:30 +0000 (0:00:00.683) 0:09:55.908 ******
2026-01-01 00:57:27.715699 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715703 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715708 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715712 | orchestrator |
2026-01-01 00:57:27.715717 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-01 00:57:27.715721 | orchestrator | Thursday 01 January 2026 00:55:30 +0000 (0:00:00.392) 0:09:56.301 ******
2026-01-01 00:57:27.715726 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715730 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715734 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715739 | orchestrator |
2026-01-01 00:57:27.715743 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-01 00:57:27.715748 | orchestrator | Thursday 01 January 2026 00:55:31 +0000 (0:00:00.396) 0:09:56.698 ******
2026-01-01 00:57:27.715752 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715757 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715761 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715766 | orchestrator |
2026-01-01 00:57:27.715770 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-01 00:57:27.715774 | orchestrator | Thursday 01 January 2026 00:55:31 +0000 (0:00:00.347) 0:09:57.045 ******
2026-01-01 00:57:27.715782 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.715787 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.715791 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.715796 | orchestrator |
2026-01-01 00:57:27.715800 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-01 00:57:27.715805 | orchestrator | Thursday 01 January 2026 00:55:32 +0000 (0:00:00.606) 0:09:57.652 ******
2026-01-01 00:57:27.715809 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.715814 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.715818 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.715822 | orchestrator |
2026-01-01 00:57:27.715827 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-01 00:57:27.715831 | orchestrator | Thursday 01 January 2026 00:55:32 +0000 (0:00:00.493) 0:09:58.145 ******
2026-01-01 00:57:27.715836 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.715840 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.715844 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.715849 | orchestrator |
2026-01-01 00:57:27.715853 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-01 00:57:27.715858 | orchestrator | Thursday 01 January 2026 00:55:32 +0000 (0:00:00.445) 0:09:58.590 ******
2026-01-01 00:57:27.715862 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715867 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715871 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715876 | orchestrator |
2026-01-01 00:57:27.715880 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-01 00:57:27.715885 | orchestrator | Thursday 01 January 2026 00:55:33 +0000 (0:00:00.328) 0:09:58.919 ******
2026-01-01 00:57:27.715889 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.715894 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.715898 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.715902 | orchestrator |
2026-01-01 00:57:27.715907 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-01-01 00:57:27.715911 | orchestrator | Thursday 01 January 2026 00:55:34 +0000 (0:00:00.895) 0:09:59.815 ******
2026-01-01 00:57:27.715916 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.715920 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.715925 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-01-01 00:57:27.715929 | orchestrator |
2026-01-01 00:57:27.715934 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-01-01 00:57:27.715938 | orchestrator | Thursday 01 January 2026 00:55:34 +0000 (0:00:00.410) 0:10:00.225 ******
2026-01-01 00:57:27.715943 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-01 00:57:27.715947 | orchestrator |
2026-01-01 00:57:27.715951 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-01-01 00:57:27.715956 | orchestrator | Thursday 01 January 2026 00:55:36 +0000 (0:00:02.123) 0:10:02.348 ******
2026-01-01 00:57:27.715963 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-01-01 00:57:27.715969 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.715973 | orchestrator |
2026-01-01 00:57:27.715978 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-01-01 00:57:27.715982 | orchestrator | Thursday 01 January 2026 00:55:36 +0000 (0:00:00.184) 0:10:02.532 ******
2026-01-01 00:57:27.715987 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-01 00:57:27.715995 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-01 00:57:27.716004 | orchestrator |
2026-01-01 00:57:27.716011 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-01-01 00:57:27.716015 | orchestrator | Thursday 01 January 2026 00:55:45 +0000 (0:00:09.008) 0:10:11.541 ******
2026-01-01 00:57:27.716020 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-01 00:57:27.716024 | orchestrator |
2026-01-01 00:57:27.716029 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-01-01 00:57:27.716033 | orchestrator | Thursday 01 January 2026 00:55:49 +0000 (0:00:03.750) 0:10:15.292 ******
2026-01-01 00:57:27.716038 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:57:27.716042 | orchestrator |
2026-01-01 00:57:27.716047 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-01-01 00:57:27.716051 | orchestrator | Thursday 01 January 2026 00:55:50 +0000 (0:00:00.629) 0:10:15.922 ******
2026-01-01 00:57:27.716056 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-01 00:57:27.716060 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-01 00:57:27.716065 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-01-01 00:57:27.716069 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-01 00:57:27.716074 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-01-01 00:57:27.716078 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-01-01 00:57:27.716083 | orchestrator |
2026-01-01 00:57:27.716087 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-01-01 00:57:27.716092 | orchestrator | Thursday 01 January 2026 00:55:51 +0000 (0:00:01.316) 0:10:17.239 ******
2026-01-01 00:57:27.716096 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-01 00:57:27.716101 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-01 00:57:27.716105 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-01 00:57:27.716110 | orchestrator |
2026-01-01 00:57:27.716114 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-01-01 00:57:27.716119 | orchestrator | Thursday 01 January 2026 00:55:54 +0000 (0:00:02.770) 0:10:20.009 ******
2026-01-01 00:57:27.716123 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-01 00:57:27.716128 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-01 00:57:27.716132 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.716137 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-01 00:57:27.716141 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-01 00:57:27.716146 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.716150 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-01 00:57:27.716154 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-01 00:57:27.716159 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.716163 | orchestrator |
2026-01-01 00:57:27.716168 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-01-01 00:57:27.716172 | orchestrator | Thursday 01 January 2026 00:55:56 +0000 (0:00:02.242) 0:10:22.252 ******
2026-01-01 00:57:27.716177 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.716181 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.716186 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.716190 | orchestrator |
2026-01-01 00:57:27.716195 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-01-01 00:57:27.716199 | orchestrator | Thursday 01 January 2026 00:55:59 +0000 (0:00:02.896) 0:10:25.148 ******
2026-01-01 00:57:27.716204 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.716211 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.716216 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.716220 | orchestrator |
2026-01-01 00:57:27.716224 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-01-01 00:57:27.716229 | orchestrator | Thursday 01 January 2026 00:56:00 +0000 (0:00:00.509) 0:10:25.658 ******
2026-01-01 00:57:27.716233 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:57:27.716238 | orchestrator |
2026-01-01 00:57:27.716242 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-01-01 00:57:27.716247 | orchestrator | Thursday 01 January 2026 00:56:01 +0000 (0:00:01.019) 0:10:26.677 ******
2026-01-01 00:57:27.716254 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:57:27.716259 | orchestrator |
2026-01-01 00:57:27.716263 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-01-01 00:57:27.716268 | orchestrator | Thursday 01 January 2026 00:56:01 +0000 (0:00:00.649) 0:10:27.327 ******
2026-01-01 00:57:27.716272 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.716277 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.716281 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.716286 | orchestrator |
2026-01-01 00:57:27.716290 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-01-01 00:57:27.716295 | orchestrator | Thursday 01 January 2026 00:56:03 +0000 (0:00:01.500) 0:10:28.827 ******
2026-01-01 00:57:27.716299 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.716304 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.716308 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.716313 | orchestrator |
2026-01-01 00:57:27.716317 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-01-01 00:57:27.716322 | orchestrator | Thursday 01 January 2026 00:56:04 +0000 (0:00:01.387) 0:10:30.215 ******
2026-01-01 00:57:27.716326 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.716330 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.716335 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.716339 | orchestrator |
2026-01-01 00:57:27.716344 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-01-01 00:57:27.716351 | orchestrator | Thursday 01 January 2026 00:56:06 +0000 (0:00:01.904) 0:10:32.119 ******
2026-01-01 00:57:27.716355 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.716360 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.716364 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.716369 | orchestrator |
2026-01-01 00:57:27.716373 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-01-01 00:57:27.716378 | orchestrator | Thursday 01 January 2026 00:56:08 +0000 (0:00:01.927) 0:10:34.047 ******
2026-01-01 00:57:27.716382 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.716387 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.716391 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.716396 | orchestrator |
2026-01-01 00:57:27.716400 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-01 00:57:27.716404 | orchestrator | Thursday 01 January 2026 00:56:09 +0000 (0:00:01.501) 0:10:35.548 ******
2026-01-01 00:57:27.716409 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.716413 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.716418 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.716422 | orchestrator |
2026-01-01 00:57:27.716427 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-01-01 00:57:27.716431 | orchestrator | Thursday 01 January 2026 00:56:10 +0000 (0:00:00.689) 0:10:36.238 ******
2026-01-01 00:57:27.716436 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:57:27.716440 | orchestrator |
2026-01-01 00:57:27.716445 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-01 00:57:27.716452 | orchestrator | Thursday 01 January 2026 00:56:11 +0000 (0:00:00.798) 0:10:37.037 ******
2026-01-01 00:57:27.716457 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.716461 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.716466 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.716470 | orchestrator |
2026-01-01 00:57:27.716475 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-01 00:57:27.716479 | orchestrator | Thursday 01 January 2026 00:56:11 +0000 (0:00:00.341) 0:10:37.379 ******
2026-01-01 00:57:27.716484 | orchestrator | changed: [testbed-node-3]
2026-01-01 00:57:27.716488 | orchestrator | changed: [testbed-node-4]
2026-01-01 00:57:27.716492 | orchestrator | changed: [testbed-node-5]
2026-01-01 00:57:27.716497 | orchestrator |
2026-01-01 00:57:27.716501 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-01 00:57:27.716506 | orchestrator | Thursday 01 January 2026 00:56:13 +0000 (0:00:01.255) 0:10:38.635 ******
2026-01-01 00:57:27.716510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-01 00:57:27.716515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-01 00:57:27.716519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-01 00:57:27.716524 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.716528 | orchestrator |
2026-01-01 00:57:27.716533 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-01 00:57:27.716537 | orchestrator | Thursday 01 January 2026 00:56:13 +0000 (0:00:00.920) 0:10:39.555 ******
2026-01-01 00:57:27.716542 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.716546 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.716551 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.716555 | orchestrator |
2026-01-01 00:57:27.716560 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-01 00:57:27.716564 | orchestrator |
2026-01-01 00:57:27.716569 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-01 00:57:27.716573 | orchestrator | Thursday 01 January 2026 00:56:14 +0000 (0:00:00.897) 0:10:40.453 ******
2026-01-01 00:57:27.716578 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:57:27.716592 | orchestrator |
2026-01-01 00:57:27.716597 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-01 00:57:27.716601 | orchestrator | Thursday 01 January 2026 00:56:15 +0000 (0:00:00.524) 0:10:40.977 ******
2026-01-01 00:57:27.716606 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 00:57:27.716610 | orchestrator |
2026-01-01 00:57:27.716615 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-01 00:57:27.716619 | orchestrator | Thursday 01 January 2026 00:56:16 +0000 (0:00:00.867) 0:10:41.844 ******
2026-01-01 00:57:27.716625 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.716634 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.716642 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.716649 | orchestrator |
2026-01-01 00:57:27.716656 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-01 00:57:27.716661 | orchestrator | Thursday 01 January 2026 00:56:16 +0000 (0:00:00.315) 0:10:42.160 ******
2026-01-01 00:57:27.716665 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.716670 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.716674 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.716679 | orchestrator |
2026-01-01 00:57:27.716683 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-01 00:57:27.716688 | orchestrator | Thursday 01 January 2026 00:56:17 +0000 (0:00:00.817) 0:10:42.978 ******
2026-01-01 00:57:27.716692 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.716697 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.716701 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.716709 | orchestrator |
2026-01-01 00:57:27.716714 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-01 00:57:27.716718 | orchestrator | Thursday 01 January 2026 00:56:18 +0000 (0:00:01.054) 0:10:44.033 ******
2026-01-01 00:57:27.716723 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.716727 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.716732 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.716736 | orchestrator |
2026-01-01 00:57:27.716741 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-01 00:57:27.716745 | orchestrator | Thursday 01 January 2026 00:56:19 +0000 (0:00:00.844) 0:10:44.877 ******
2026-01-01 00:57:27.716750 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.716757 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.716762 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.716766 | orchestrator |
2026-01-01 00:57:27.716771 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-01 00:57:27.716775 | orchestrator | Thursday 01 January 2026 00:56:19 +0000 (0:00:00.332) 0:10:45.210 ******
2026-01-01 00:57:27.716780 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.716784 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.716789 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.716793 | orchestrator |
2026-01-01 00:57:27.716798 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-01 00:57:27.716802 | orchestrator | Thursday 01 January 2026 00:56:19 +0000 (0:00:00.325) 0:10:45.535 ******
2026-01-01 00:57:27.716806 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.716811 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.716815 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.716820 | orchestrator |
2026-01-01 00:57:27.716824 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-01 00:57:27.716829 | orchestrator | Thursday 01 January 2026 00:56:20 +0000 (0:00:00.458) 0:10:45.994 ******
2026-01-01 00:57:27.716833 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.716838 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.716842 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.716847 | orchestrator |
2026-01-01 00:57:27.716851 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-01 00:57:27.716856 | orchestrator | Thursday 01 January 2026 00:56:21 +0000 (0:00:00.685) 0:10:46.679 ******
2026-01-01 00:57:27.716860 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.716865 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.716869 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.716874 | orchestrator |
2026-01-01 00:57:27.716878 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-01 00:57:27.716882 | orchestrator | Thursday 01 January 2026 00:56:21 +0000 (0:00:00.700) 0:10:47.380 ******
2026-01-01 00:57:27.716887 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.716891 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.716896 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.716900 | orchestrator |
2026-01-01 00:57:27.716905 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-01 00:57:27.716909 | orchestrator | Thursday 01 January 2026 00:56:22 +0000 (0:00:00.286) 0:10:47.666 ******
2026-01-01 00:57:27.716914 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.716918 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.716923 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.716927 | orchestrator |
2026-01-01 00:57:27.716932 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-01 00:57:27.716936 | orchestrator | Thursday 01 January 2026 00:56:22 +0000 (0:00:00.428) 0:10:48.094 ******
2026-01-01 00:57:27.716943 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.716951 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.716958 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.716969 | orchestrator |
2026-01-01 00:57:27.716978 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-01 00:57:27.716993 | orchestrator | Thursday 01 January 2026 00:56:22 +0000 (0:00:00.301) 0:10:48.396 ******
2026-01-01 00:57:27.717001 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.717007 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.717014 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.717021 | orchestrator |
2026-01-01 00:57:27.717029 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-01 00:57:27.717036 | orchestrator | Thursday 01 January 2026 00:56:23 +0000 (0:00:00.292) 0:10:48.689 ******
2026-01-01 00:57:27.717044 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.717051 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.717059 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.717067 | orchestrator |
2026-01-01 00:57:27.717073 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-01 00:57:27.717078 | orchestrator | Thursday 01 January 2026 00:56:23 +0000 (0:00:00.317) 0:10:49.007 ******
2026-01-01 00:57:27.717082 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.717087 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.717091 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.717096 | orchestrator |
2026-01-01 00:57:27.717100 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-01 00:57:27.717105 | orchestrator | Thursday 01 January 2026 00:56:23 +0000 (0:00:00.478) 0:10:49.485 ******
2026-01-01 00:57:27.717109 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.717114 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.717118 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.717123 | orchestrator |
2026-01-01 00:57:27.717127 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-01 00:57:27.717134 | orchestrator | Thursday 01 January 2026 00:56:24 +0000 (0:00:00.360) 0:10:49.846 ******
2026-01-01 00:57:27.717139 | orchestrator | skipping: [testbed-node-3]
2026-01-01 00:57:27.717144 | orchestrator | skipping: [testbed-node-4]
2026-01-01 00:57:27.717148 | orchestrator | skipping: [testbed-node-5]
2026-01-01 00:57:27.717153 | orchestrator |
2026-01-01 00:57:27.717157 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-01 00:57:27.717162 | orchestrator | Thursday 01 January 2026 00:56:24 +0000 (0:00:00.362) 0:10:50.208 ******
2026-01-01 00:57:27.717166 | orchestrator | ok: [testbed-node-3]
2026-01-01 00:57:27.717171 | orchestrator | ok: [testbed-node-4]
2026-01-01 00:57:27.717175 | orchestrator | ok: [testbed-node-5]
2026-01-01 00:57:27.717180 | orchestrator |
2026-01-01 00:57:27.717184 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-01 00:57:27.717189 | orchestrator | Thursday 01 January 2026 00:56:24 +0000 (0:00:00.333) 0:10:50.542 ****** 2026-01-01 00:57:27.717193 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.717198 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.717202 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.717207 | orchestrator | 2026-01-01 00:57:27.717211 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-01 00:57:27.717216 | orchestrator | Thursday 01 January 2026 00:56:25 +0000 (0:00:00.854) 0:10:51.396 ****** 2026-01-01 00:57:27.717224 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.717232 | orchestrator | 2026-01-01 00:57:27.717240 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-01 00:57:27.717248 | orchestrator | Thursday 01 January 2026 00:56:26 +0000 (0:00:00.553) 0:10:51.950 ****** 2026-01-01 00:57:27.717254 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.717262 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-01 00:57:27.717269 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 00:57:27.717277 | orchestrator | 2026-01-01 00:57:27.717284 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-01 00:57:27.717296 | orchestrator | Thursday 01 January 2026 00:56:28 +0000 (0:00:02.245) 0:10:54.195 ****** 2026-01-01 00:57:27.717302 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-01 00:57:27.717309 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-01 00:57:27.717316 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.717323 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-01 00:57:27.717330 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-01 00:57:27.717337 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.717345 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-01 00:57:27.717352 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-01 00:57:27.717360 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.717367 | orchestrator | 2026-01-01 00:57:27.717374 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-01 00:57:27.717382 | orchestrator | Thursday 01 January 2026 00:56:30 +0000 (0:00:01.612) 0:10:55.808 ****** 2026-01-01 00:57:27.717390 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.717398 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.717406 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.717414 | orchestrator | 2026-01-01 00:57:27.717421 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-01 00:57:27.717429 | orchestrator | Thursday 01 January 2026 00:56:30 +0000 (0:00:00.358) 0:10:56.167 ****** 2026-01-01 00:57:27.717434 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.717438 | orchestrator | 2026-01-01 00:57:27.717443 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-01 00:57:27.717447 | orchestrator | Thursday 01 January 2026 00:56:31 +0000 (0:00:00.567) 0:10:56.734 ****** 2026-01-01 00:57:27.717452 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-01 00:57:27.717457 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-01 00:57:27.717461 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-01 00:57:27.717466 | orchestrator | 2026-01-01 00:57:27.717470 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-01 00:57:27.717475 | orchestrator | Thursday 01 January 2026 00:56:32 +0000 (0:00:01.394) 0:10:58.128 ****** 2026-01-01 00:57:27.717479 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.717484 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-01 00:57:27.717488 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.717493 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-01 00:57:27.717497 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.717502 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-01 00:57:27.717506 | orchestrator | 2026-01-01 00:57:27.717510 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-01 00:57:27.717518 | orchestrator | Thursday 01 January 2026 00:56:36 +0000 (0:00:04.388) 0:11:02.517 ****** 2026-01-01 00:57:27.717522 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.717527 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 00:57:27.717531 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.717539 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 00:57:27.717544 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:57:27.717548 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 00:57:27.717552 | orchestrator | 2026-01-01 00:57:27.717557 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-01 00:57:27.717561 | orchestrator | Thursday 01 January 2026 00:56:39 +0000 (0:00:02.572) 0:11:05.090 ****** 2026-01-01 00:57:27.717566 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-01 00:57:27.717570 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.717575 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-01 00:57:27.717579 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.717621 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-01 00:57:27.717629 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.717634 | orchestrator | 2026-01-01 00:57:27.717643 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-01 00:57:27.717648 | orchestrator | Thursday 01 January 2026 00:56:40 +0000 (0:00:01.482) 0:11:06.572 ****** 2026-01-01 00:57:27.717652 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-01 00:57:27.717657 | orchestrator | 2026-01-01 00:57:27.717662 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-01 00:57:27.717666 | orchestrator | Thursday 01 January 2026 00:56:41 +0000 (0:00:00.238) 0:11:06.811 ****** 2026-01-01 00:57:27.717671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-01 00:57:27.717675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 00:57:27.717680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 00:57:27.717685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 00:57:27.717689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 00:57:27.717694 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.717698 | orchestrator | 2026-01-01 00:57:27.717702 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-01 00:57:27.717707 | orchestrator | Thursday 01 January 2026 00:56:42 +0000 (0:00:01.266) 0:11:08.077 ****** 2026-01-01 00:57:27.717711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 00:57:27.717716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 00:57:27.717720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 00:57:27.717725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 00:57:27.717730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-01 00:57:27.717734 | orchestrator | skipping: [testbed-node-3] 2026-01-01 
00:57:27.717738 | orchestrator | 2026-01-01 00:57:27.717743 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-01 00:57:27.717747 | orchestrator | Thursday 01 January 2026 00:56:43 +0000 (0:00:00.709) 0:11:08.787 ****** 2026-01-01 00:57:27.717752 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-01 00:57:27.717762 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-01 00:57:27.717767 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-01 00:57:27.717771 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-01 00:57:27.717776 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-01 00:57:27.717780 | orchestrator | 2026-01-01 00:57:27.717785 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-01 00:57:27.717792 | orchestrator | Thursday 01 January 2026 00:57:13 +0000 (0:00:30.016) 0:11:38.803 ****** 2026-01-01 00:57:27.717797 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.717801 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.717806 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.717810 | orchestrator | 2026-01-01 00:57:27.717815 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-01 00:57:27.717819 | orchestrator | 
Thursday 01 January 2026 00:57:13 +0000 (0:00:00.354) 0:11:39.158 ****** 2026-01-01 00:57:27.717824 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.717828 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.717833 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.717837 | orchestrator | 2026-01-01 00:57:27.717842 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-01 00:57:27.717846 | orchestrator | Thursday 01 January 2026 00:57:13 +0000 (0:00:00.329) 0:11:39.487 ****** 2026-01-01 00:57:27.717851 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.717855 | orchestrator | 2026-01-01 00:57:27.717860 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-01 00:57:27.717864 | orchestrator | Thursday 01 January 2026 00:57:14 +0000 (0:00:00.822) 0:11:40.310 ****** 2026-01-01 00:57:27.717871 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.717876 | orchestrator | 2026-01-01 00:57:27.717880 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-01 00:57:27.717885 | orchestrator | Thursday 01 January 2026 00:57:15 +0000 (0:00:00.638) 0:11:40.949 ****** 2026-01-01 00:57:27.717889 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.717894 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.717898 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.717903 | orchestrator | 2026-01-01 00:57:27.717907 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-01 00:57:27.717912 | orchestrator | Thursday 01 January 2026 00:57:16 +0000 (0:00:01.242) 0:11:42.191 ****** 2026-01-01 00:57:27.717916 | orchestrator | changed: 
[testbed-node-3] 2026-01-01 00:57:27.717921 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.717925 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.717929 | orchestrator | 2026-01-01 00:57:27.717934 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-01 00:57:27.717942 | orchestrator | Thursday 01 January 2026 00:57:18 +0000 (0:00:01.444) 0:11:43.636 ****** 2026-01-01 00:57:27.717949 | orchestrator | changed: [testbed-node-3] 2026-01-01 00:57:27.717957 | orchestrator | changed: [testbed-node-5] 2026-01-01 00:57:27.717964 | orchestrator | changed: [testbed-node-4] 2026-01-01 00:57:27.717971 | orchestrator | 2026-01-01 00:57:27.717977 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-01 00:57:27.717983 | orchestrator | Thursday 01 January 2026 00:57:19 +0000 (0:00:01.879) 0:11:45.516 ****** 2026-01-01 00:57:27.717992 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-01 00:57:27.717998 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-01 00:57:27.718003 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-01 00:57:27.718009 | orchestrator | 2026-01-01 00:57:27.718034 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-01 00:57:27.718041 | orchestrator | Thursday 01 January 2026 00:57:22 +0000 (0:00:02.899) 0:11:48.415 ****** 2026-01-01 00:57:27.718048 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.718055 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.718063 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.718070 | orchestrator 
| 2026-01-01 00:57:27.718077 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-01 00:57:27.718084 | orchestrator | Thursday 01 January 2026 00:57:23 +0000 (0:00:00.437) 0:11:48.853 ****** 2026-01-01 00:57:27.718092 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:57:27.718099 | orchestrator | 2026-01-01 00:57:27.718105 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-01 00:57:27.718112 | orchestrator | Thursday 01 January 2026 00:57:23 +0000 (0:00:00.537) 0:11:49.390 ****** 2026-01-01 00:57:27.718119 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.718126 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.718132 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.718138 | orchestrator | 2026-01-01 00:57:27.718145 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-01 00:57:27.718152 | orchestrator | Thursday 01 January 2026 00:57:24 +0000 (0:00:00.476) 0:11:49.866 ****** 2026-01-01 00:57:27.718159 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:57:27.718167 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:57:27.718174 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:57:27.718181 | orchestrator | 2026-01-01 00:57:27.718188 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-01 00:57:27.718196 | orchestrator | Thursday 01 January 2026 00:57:24 +0000 (0:00:00.311) 0:11:50.178 ****** 2026-01-01 00:57:27.718203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 00:57:27.718210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 00:57:27.718214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 00:57:27.718218 | orchestrator 
| skipping: [testbed-node-3] 2026-01-01 00:57:27.718222 | orchestrator | 2026-01-01 00:57:27.718226 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-01 00:57:27.718230 | orchestrator | Thursday 01 January 2026 00:57:25 +0000 (0:00:00.645) 0:11:50.823 ****** 2026-01-01 00:57:27.718235 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:57:27.718241 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:57:27.718246 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:57:27.718250 | orchestrator | 2026-01-01 00:57:27.718254 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:57:27.718258 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-01 00:57:27.718262 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-01 00:57:27.718266 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-01 00:57:27.718270 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-01 00:57:27.718278 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-01 00:57:27.718286 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-01 00:57:27.718291 | orchestrator | 2026-01-01 00:57:27.718295 | orchestrator | 2026-01-01 00:57:27.718299 | orchestrator | 2026-01-01 00:57:27.718303 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:57:27.718307 | orchestrator | Thursday 01 January 2026 00:57:25 +0000 (0:00:00.315) 0:11:51.139 ****** 2026-01-01 00:57:27.718311 | orchestrator | =============================================================================== 
2026-01-01 00:57:27.718315 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 56.46s 2026-01-01 00:57:27.718320 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.57s 2026-01-01 00:57:27.718324 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.16s 2026-01-01 00:57:27.718328 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.02s 2026-01-01 00:57:27.718333 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.24s 2026-01-01 00:57:27.718338 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.50s 2026-01-01 00:57:27.718342 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.57s 2026-01-01 00:57:27.718346 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.18s 2026-01-01 00:57:27.718350 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.16s 2026-01-01 00:57:27.718354 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.01s 2026-01-01 00:57:27.718358 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.68s 2026-01-01 00:57:27.718362 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.85s 2026-01-01 00:57:27.718366 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.54s 2026-01-01 00:57:27.718370 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.41s 2026-01-01 00:57:27.718374 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.50s 2026-01-01 00:57:27.718378 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.39s 2026-01-01 
00:57:27.718382 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.18s 2026-01-01 00:57:27.718386 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.98s 2026-01-01 00:57:27.718390 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.75s 2026-01-01 00:57:27.718394 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.49s 2026-01-01 00:57:27.718398 | orchestrator | 2026-01-01 00:57:27 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:57:27.718402 | orchestrator | 2026-01-01 00:57:27 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED 2026-01-01 00:57:27.718406 | orchestrator | 2026-01-01 00:57:27 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state STARTED 2026-01-01 00:57:27.718410 | orchestrator | 2026-01-01 00:57:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:58:13.507261 | orchestrator | 2026-01-01 00:58:13 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:58:13.509440 | orchestrator | 2026-01-01 00:58:13 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED 2026-01-01 00:58:13.514873 | orchestrator | 2026-01-01 00:58:13.514939 | orchestrator | 2026-01-01 00:58:13.514953 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:58:13.515004 | orchestrator | 2026-01-01 00:58:13.515016 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 00:58:13.515059 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:00.341) 0:00:00.341 ****** 2026-01-01 00:58:13.515074 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:58:13.515090 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:58:13.515104 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:58:13.515119 | orchestrator | 2026-01-01 00:58:13.515133 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 00:58:13.515149 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:00.411) 0:00:00.752 ****** 2026-01-01 00:58:13.515164 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-01 00:58:13.515180 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-01 00:58:13.515196 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-01 00:58:13.515212 | orchestrator | 2026-01-01 00:58:13.515221 | orchestrator | PLAY [Apply role opensearch]
*************************************************** 2026-01-01 00:58:13.515230 | orchestrator | 2026-01-01 00:58:13.515239 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-01 00:58:13.515375 | orchestrator | Thursday 01 January 2026 00:55:26 +0000 (0:00:00.456) 0:00:01.208 ****** 2026-01-01 00:58:13.515386 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:58:13.515395 | orchestrator | 2026-01-01 00:58:13.515404 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-01 00:58:13.515412 | orchestrator | Thursday 01 January 2026 00:55:27 +0000 (0:00:00.517) 0:00:01.726 ****** 2026-01-01 00:58:13.515421 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-01 00:58:13.515430 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-01 00:58:13.515439 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-01 00:58:13.515447 | orchestrator | 2026-01-01 00:58:13.515456 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-01 00:58:13.515465 | orchestrator | Thursday 01 January 2026 00:55:29 +0000 (0:00:01.691) 0:00:03.418 ****** 2026-01-01 00:58:13.515489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:58:13.515502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:58:13.515525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2026-01-01 00:58:13.515548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.515559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.515604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.515614 | orchestrator | 2026-01-01 00:58:13.515623 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-01 00:58:13.515638 | orchestrator | Thursday 01 January 2026 00:55:31 +0000 (0:00:02.231) 0:00:05.650 ****** 2026-01-01 00:58:13.515647 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:58:13.515656 | orchestrator | 2026-01-01 00:58:13.515665 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-01 00:58:13.515673 | orchestrator | Thursday 01 January 2026 00:55:32 
+0000 (0:00:01.039) 0:00:06.689 ****** 2026-01-01 00:58:13.515693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:58:13.515703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:58:13.515712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:58:13.515726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.515747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.515757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.515767 | orchestrator | 
2026-01-01 00:58:13.515776 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-01 00:58:13.515785 | orchestrator | Thursday 01 January 2026 00:55:35 +0000 (0:00:03.256) 0:00:09.946 ****** 2026-01-01 00:58:13.515794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-01 00:58:13.515808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-01 00:58:13.515823 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:58:13.515832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-01 00:58:13.515848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}})  2026-01-01 00:58:13.515858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-01 00:58:13.515867 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:58:13.515881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-01 00:58:13.515896 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:58:13.515905 | orchestrator | 2026-01-01 00:58:13.515914 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-01 00:58:13.515923 | orchestrator | Thursday 01 January 2026 00:55:36 +0000 (0:00:00.912) 0:00:10.858 ****** 2026-01-01 00:58:13.515932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-01 00:58:13.515948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-01 00:58:13.515957 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:58:13.515967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-01 00:58:13.515980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-01 00:58:13.516027 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:58:13.516038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-01 00:58:13.516057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-01 00:58:13.516068 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:58:13.516079 | orchestrator | 2026-01-01 00:58:13.516089 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-01 00:58:13.516098 | orchestrator | Thursday 01 January 2026 00:55:37 +0000 (0:00:00.791) 0:00:11.650 ****** 2026-01-01 00:58:13.516108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:58:13.516124 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:58:13.516146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:58:13.516164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.516176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.516192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.516209 | orchestrator | 2026-01-01 00:58:13.516220 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-01 00:58:13.516231 | orchestrator | Thursday 01 January 2026 00:55:39 +0000 (0:00:02.386) 0:00:14.036 ****** 2026-01-01 00:58:13.516241 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:58:13.516251 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:58:13.516261 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:58:13.516271 | orchestrator | 2026-01-01 00:58:13.516281 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-01 00:58:13.516292 | orchestrator | Thursday 01 January 2026 00:55:42 +0000 (0:00:02.526) 0:00:16.562 ****** 2026-01-01 00:58:13.516302 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:58:13.516312 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:58:13.516322 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:58:13.516331 | 
orchestrator | 2026-01-01 00:58:13.516341 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-01 00:58:13.516351 | orchestrator | Thursday 01 January 2026 00:55:44 +0000 (0:00:02.201) 0:00:18.764 ****** 2026-01-01 00:58:13.516362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:58:13.516378 | orchestrator | 2026-01-01 00:58:13 | INFO  | Task 2835fe21-bd7d-46e6-b61a-ef7093a239d8 is in state SUCCESS 2026-01-01 00:58:13.516388 | orchestrator | 2026-01-01 00:58:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:58:13.516397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:58:13.516407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-01 00:58:13.516427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.516438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.516454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-01 00:58:13.516464 | orchestrator | 2026-01-01 00:58:13.516473 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-01 00:58:13.516481 | orchestrator | Thursday 01 January 2026 00:55:46 +0000 (0:00:02.066) 0:00:20.830 ****** 2026-01-01 00:58:13.516490 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:58:13.516499 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:58:13.516508 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:58:13.516525 | orchestrator | 2026-01-01 00:58:13.516534 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-01 00:58:13.516543 | orchestrator | Thursday 01 January 2026 00:55:46 +0000 (0:00:00.325) 0:00:21.156 ****** 2026-01-01 00:58:13.516552 | orchestrator | 2026-01-01 00:58:13.516594 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-01 00:58:13.516605 | orchestrator | Thursday 01 January 2026 00:55:46 +0000 (0:00:00.071) 0:00:21.227 ****** 2026-01-01 00:58:13.516614 | orchestrator | 2026-01-01 00:58:13.516623 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-01 00:58:13.516631 | orchestrator | Thursday 01 January 2026 00:55:46 +0000 (0:00:00.068) 0:00:21.296 ****** 2026-01-01 00:58:13.516640 | orchestrator | 2026-01-01 00:58:13.516648 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-01 00:58:13.516657 | orchestrator | Thursday 01 January 2026 00:55:47 +0000 (0:00:00.075) 
0:00:21.371 ****** 2026-01-01 00:58:13.516665 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:58:13.516674 | orchestrator | 2026-01-01 00:58:13.516682 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-01 00:58:13.516691 | orchestrator | Thursday 01 January 2026 00:55:47 +0000 (0:00:00.200) 0:00:21.572 ****** 2026-01-01 00:58:13.516699 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:58:13.516708 | orchestrator | 2026-01-01 00:58:13.516716 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-01 00:58:13.516725 | orchestrator | Thursday 01 January 2026 00:55:47 +0000 (0:00:00.711) 0:00:22.284 ****** 2026-01-01 00:58:13.516733 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:58:13.516747 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:58:13.516756 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:58:13.516764 | orchestrator | 2026-01-01 00:58:13.516773 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-01 00:58:13.516781 | orchestrator | Thursday 01 January 2026 00:56:47 +0000 (0:00:59.587) 0:01:21.871 ****** 2026-01-01 00:58:13.516790 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:58:13.516798 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:58:13.516806 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:58:13.516815 | orchestrator | 2026-01-01 00:58:13.516823 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-01 00:58:13.516832 | orchestrator | Thursday 01 January 2026 00:57:58 +0000 (0:01:10.624) 0:02:32.496 ****** 2026-01-01 00:58:13.516840 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:58:13.516849 | orchestrator | 2026-01-01 00:58:13.516857 | orchestrator | TASK [opensearch : Wait for 
OpenSearch to become ready] ************************ 2026-01-01 00:58:13.516866 | orchestrator | Thursday 01 January 2026 00:57:58 +0000 (0:00:00.792) 0:02:33.289 ****** 2026-01-01 00:58:13.516874 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:58:13.516883 | orchestrator | 2026-01-01 00:58:13.516891 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-01 00:58:13.516900 | orchestrator | Thursday 01 January 2026 00:58:01 +0000 (0:00:02.834) 0:02:36.123 ****** 2026-01-01 00:58:13.516908 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:58:13.516916 | orchestrator | 2026-01-01 00:58:13.516925 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-01 00:58:13.516933 | orchestrator | Thursday 01 January 2026 00:58:04 +0000 (0:00:02.651) 0:02:38.775 ****** 2026-01-01 00:58:13.516942 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:58:13.516950 | orchestrator | 2026-01-01 00:58:13.516959 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-01 00:58:13.516967 | orchestrator | Thursday 01 January 2026 00:58:07 +0000 (0:00:03.085) 0:02:41.860 ****** 2026-01-01 00:58:13.516975 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:58:13.516984 | orchestrator | 2026-01-01 00:58:13.516992 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:58:13.517001 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-01 00:58:13.517016 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-01 00:58:13.517031 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-01 00:58:13.517040 | orchestrator | 2026-01-01 00:58:13.517049 | orchestrator | 2026-01-01 00:58:13.517058 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-01 00:58:13.517066 | orchestrator | Thursday 01 January 2026 00:58:10 +0000 (0:00:02.580) 0:02:44.441 ****** 2026-01-01 00:58:13.517075 | orchestrator | =============================================================================== 2026-01-01 00:58:13.517083 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 70.62s 2026-01-01 00:58:13.517092 | orchestrator | opensearch : Restart opensearch container ------------------------------ 59.59s 2026-01-01 00:58:13.517100 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.26s 2026-01-01 00:58:13.517109 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.09s 2026-01-01 00:58:13.517117 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.83s 2026-01-01 00:58:13.517126 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.65s 2026-01-01 00:58:13.517134 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.58s 2026-01-01 00:58:13.517142 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.53s 2026-01-01 00:58:13.517151 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.39s 2026-01-01 00:58:13.517159 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.23s 2026-01-01 00:58:13.517168 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.20s 2026-01-01 00:58:13.517176 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.07s 2026-01-01 00:58:13.517185 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.69s 2026-01-01 00:58:13.517193 | orchestrator | opensearch : 
include_tasks ---------------------------------------------- 1.04s 2026-01-01 00:58:13.517202 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.91s 2026-01-01 00:58:13.517210 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.79s 2026-01-01 00:58:13.517219 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.79s 2026-01-01 00:58:13.517227 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.71s 2026-01-01 00:58:13.517236 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-01-01 00:58:13.517245 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-01-01 00:58:16.558146 | orchestrator | 2026-01-01 00:58:16 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:58:16.559153 | orchestrator | 2026-01-01 00:58:16 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED 2026-01-01 00:58:16.559206 | orchestrator | 2026-01-01 00:58:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:58:19.610363 | orchestrator | 2026-01-01 00:58:19 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:58:19.612439 | orchestrator | 2026-01-01 00:58:19 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED 2026-01-01 00:58:19.612467 | orchestrator | 2026-01-01 00:58:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:58:22.657876 | orchestrator | 2026-01-01 00:58:22 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:58:22.659710 | orchestrator | 2026-01-01 00:58:22 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED 2026-01-01 00:58:22.659841 | orchestrator | 2026-01-01 00:58:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:58:25.705770 
| orchestrator | 2026-01-01 00:58:25 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:58:25.708864 | orchestrator | 2026-01-01 00:58:25 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED 2026-01-01 00:58:25.708937 | orchestrator | 2026-01-01 00:58:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:58:28.759225 | orchestrator | 2026-01-01 00:58:28 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:58:28.760277 | orchestrator | 2026-01-01 00:58:28 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED 2026-01-01 00:58:28.760321 | orchestrator | 2026-01-01 00:58:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:58:31.813747 | orchestrator | 2026-01-01 00:58:31 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state STARTED 2026-01-01 00:58:31.815742 | orchestrator | 2026-01-01 00:58:31 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED 2026-01-01 00:58:31.815793 | orchestrator | 2026-01-01 00:58:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:58:34.867339 | orchestrator | 2026-01-01 00:58:34 | INFO  | Task aa70cdef-9691-4e4a-9469-7286766e9a1a is in state SUCCESS 2026-01-01 00:58:34.869400 | orchestrator | 2026-01-01 00:58:34.869455 | orchestrator | 2026-01-01 00:58:34.869467 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-01 00:58:34.869478 | orchestrator | 2026-01-01 00:58:34.869488 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-01 00:58:34.869504 | orchestrator | Thursday 01 January 2026 00:55:25 +0000 (0:00:00.095) 0:00:00.095 ****** 2026-01-01 00:58:34.869519 | orchestrator | ok: [localhost] => { 2026-01-01 00:58:34.869535 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2026-01-01 00:58:34.869742 | orchestrator | } 2026-01-01 00:58:34.869765 | orchestrator | 2026-01-01 00:58:34.869781 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-01 00:58:34.870116 | orchestrator | Thursday 01 January 2026 00:55:25 +0000 (0:00:00.046) 0:00:00.141 ****** 2026-01-01 00:58:34.870148 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-01 00:58:34.870165 | orchestrator | ...ignoring 2026-01-01 00:58:34.870179 | orchestrator | 2026-01-01 00:58:34.870192 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-01 00:58:34.870206 | orchestrator | Thursday 01 January 2026 00:55:28 +0000 (0:00:02.873) 0:00:03.014 ****** 2026-01-01 00:58:34.870220 | orchestrator | skipping: [localhost] 2026-01-01 00:58:34.870232 | orchestrator | 2026-01-01 00:58:34.870373 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-01 00:58:34.870392 | orchestrator | Thursday 01 January 2026 00:55:28 +0000 (0:00:00.051) 0:00:03.066 ****** 2026-01-01 00:58:34.870406 | orchestrator | ok: [localhost] 2026-01-01 00:58:34.870419 | orchestrator | 2026-01-01 00:58:34.870429 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 00:58:34.870509 | orchestrator | 2026-01-01 00:58:34.870522 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 00:58:34.870531 | orchestrator | Thursday 01 January 2026 00:55:28 +0000 (0:00:00.148) 0:00:03.215 ****** 2026-01-01 00:58:34.870540 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:58:34.870549 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:58:34.870584 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:58:34.870593 | orchestrator | 2026-01-01 00:58:34.870632 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 00:58:34.870642 | orchestrator | Thursday 01 January 2026 00:55:29 +0000 (0:00:00.368) 0:00:03.583 ****** 2026-01-01 00:58:34.870651 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-01 00:58:34.870660 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-01 00:58:34.870668 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-01 00:58:34.870677 | orchestrator | 2026-01-01 00:58:34.870691 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-01 00:58:34.870705 | orchestrator | 2026-01-01 00:58:34.870720 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-01 00:58:34.870736 | orchestrator | Thursday 01 January 2026 00:55:30 +0000 (0:00:01.209) 0:00:04.793 ****** 2026-01-01 00:58:34.870747 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-01 00:58:34.870762 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-01 00:58:34.870795 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-01 00:58:34.870805 | orchestrator | 2026-01-01 00:58:34.870814 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-01 00:58:34.870822 | orchestrator | Thursday 01 January 2026 00:55:30 +0000 (0:00:00.398) 0:00:05.191 ****** 2026-01-01 00:58:34.870833 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:58:34.870849 | orchestrator | 2026-01-01 00:58:34.870864 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-01 00:58:34.870878 | orchestrator | Thursday 01 January 2026 00:55:31 +0000 (0:00:00.551) 0:00:05.743 ****** 2026-01-01 00:58:34.870916 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 00:58:34.870938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 00:58:34.870970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 00:58:34.870987 | orchestrator | 2026-01-01 00:58:34.871005 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-01 00:58:34.871015 | orchestrator | Thursday 01 January 2026 00:55:35 +0000 (0:00:03.736) 0:00:09.479 ****** 2026-01-01 00:58:34.871028 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:58:34.871045 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:58:34.871061 | 
orchestrator | changed: [testbed-node-0] 2026-01-01 00:58:34.871077 | orchestrator | 2026-01-01 00:58:34.871092 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-01 00:58:34.871108 | orchestrator | Thursday 01 January 2026 00:55:35 +0000 (0:00:00.617) 0:00:10.097 ****** 2026-01-01 00:58:34.871123 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:58:34.871132 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:58:34.871141 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:58:34.871157 | orchestrator | 2026-01-01 00:58:34.871168 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-01 00:58:34.871179 | orchestrator | Thursday 01 January 2026 00:55:37 +0000 (0:00:01.410) 0:00:11.508 ****** 2026-01-01 00:58:34.871195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 00:58:34.871215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 00:58:34.871228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 00:58:34.871245 | orchestrator | 2026-01-01 00:58:34.871256 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-01 00:58:34.871265 | orchestrator | Thursday 01 January 2026 00:55:40 +0000 (0:00:03.280) 0:00:14.788 ****** 2026-01-01 00:58:34.871276 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:58:34.871293 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:58:34.871308 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:58:34.871329 | orchestrator | 2026-01-01 00:58:34.871347 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-01 00:58:34.871361 | orchestrator | Thursday 01 January 2026 00:55:41 +0000 (0:00:01.260) 0:00:16.049 ****** 2026-01-01 00:58:34.871374 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:58:34.871386 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:58:34.871399 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:58:34.871413 | orchestrator | 2026-01-01 00:58:34.871426 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-01 00:58:34.871440 | orchestrator | Thursday 01 January 2026 00:55:46 +0000 (0:00:04.666) 0:00:20.715 ****** 2026-01-01 00:58:34.871455 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 00:58:34.871470 | orchestrator | 2026-01-01 00:58:34.871485 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-01 
00:58:34.871523 | orchestrator | Thursday 01 January 2026 00:55:46 +0000 (0:00:00.551) 0:00:21.267 ****** 2026-01-01 00:58:34.871574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:58:34.871604 | orchestrator | 
skipping: [testbed-node-1] 2026-01-01 00:58:34.871622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:58:34.871632 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:58:34.871649 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:58:34.871664 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:58:34.871673 | orchestrator | 2026-01-01 00:58:34.871682 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2026-01-01 00:58:34.871690 | orchestrator | Thursday 01 January 2026 00:55:49 +0000 (0:00:02.657) 0:00:23.925 ****** 2026-01-01 00:58:34.871704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2026-01-01 00:58:34.871714 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:58:34.871728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:58:34.871744 | orchestrator | skipping: 
[testbed-node-1] 2026-01-01 00:58:34.871754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:58:34.871763 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:58:34.871772 | orchestrator | 2026-01-01 
00:58:34.871781 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-01 00:58:34.871789 | orchestrator | Thursday 01 January 2026 00:55:53 +0000 (0:00:03.558) 0:00:27.483 ****** 2026-01-01 00:58:34.871803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:58:34.871830 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:58:34.871849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-01-01 00:58:34.871859 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:58:34.871872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-01 00:58:34.871887 | orchestrator | skipping: 
[testbed-node-1] 2026-01-01 00:58:34.871897 | orchestrator | 2026-01-01 00:58:34.871912 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-01 00:58:34.871928 | orchestrator | Thursday 01 January 2026 00:55:56 +0000 (0:00:03.335) 0:00:30.819 ****** 2026-01-01 00:58:34.871956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 00:58:34.871978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-01-01 00:58:34.872013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-01 00:58:34.872028 | orchestrator | 2026-01-01 00:58:34.872037 | orchestrator | TASK [mariadb : Create MariaDB volume] 
***************************************** 2026-01-01 00:58:34.872046 | orchestrator | Thursday 01 January 2026 00:56:00 +0000 (0:00:04.111) 0:00:34.930 ****** 2026-01-01 00:58:34.872055 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:58:34.872064 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:58:34.872072 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:58:34.872081 | orchestrator | 2026-01-01 00:58:34.872090 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-01 00:58:34.872098 | orchestrator | Thursday 01 January 2026 00:56:01 +0000 (0:00:01.022) 0:00:35.953 ****** 2026-01-01 00:58:34.872107 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:58:34.872116 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:58:34.872124 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:58:34.872133 | orchestrator | 2026-01-01 00:58:34.872141 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-01 00:58:34.872150 | orchestrator | Thursday 01 January 2026 00:56:02 +0000 (0:00:00.671) 0:00:36.625 ****** 2026-01-01 00:58:34.872158 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:58:34.872180 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:58:34.872190 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:58:34.872198 | orchestrator | 2026-01-01 00:58:34.872224 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-01 00:58:34.872239 | orchestrator | Thursday 01 January 2026 00:56:02 +0000 (0:00:00.473) 0:00:37.098 ****** 2026-01-01 00:58:34.872255 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-01 00:58:34.872270 | orchestrator | ...ignoring 2026-01-01 00:58:34.872285 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-01 00:58:34.872309 | orchestrator | ...ignoring 2026-01-01 00:58:34.872324 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-01 00:58:34.872334 | orchestrator | ...ignoring 2026-01-01 00:58:34.872343 | orchestrator | 2026-01-01 00:58:34.872352 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-01 00:58:34.872361 | orchestrator | Thursday 01 January 2026 00:56:13 +0000 (0:00:11.053) 0:00:48.152 ****** 2026-01-01 00:58:34.872369 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:58:34.872377 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:58:34.872386 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:58:34.872394 | orchestrator | 2026-01-01 00:58:34.872403 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-01 00:58:34.872412 | orchestrator | Thursday 01 January 2026 00:56:14 +0000 (0:00:00.479) 0:00:48.631 ****** 2026-01-01 00:58:34.872497 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:58:34.872517 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:58:34.872525 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:58:34.872534 | orchestrator | 2026-01-01 00:58:34.872544 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-01 00:58:34.872613 | orchestrator | Thursday 01 January 2026 00:56:14 +0000 (0:00:00.714) 0:00:49.346 ****** 2026-01-01 00:58:34.872629 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:58:34.872644 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:58:34.872658 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:58:34.872672 | orchestrator | 2026-01-01 00:58:34.872687 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] *********************
2026-01-01 00:58:34.872702 | orchestrator | Thursday 01 January 2026 00:56:15 +0000 (0:00:00.506) 0:00:49.853 ******
2026-01-01 00:58:34.872716 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:58:34.872731 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:58:34.872746 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:58:34.872762 | orchestrator |
2026-01-01 00:58:34.872776 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-01-01 00:58:34.872791 | orchestrator | Thursday 01 January 2026 00:56:15 +0000 (0:00:00.552) 0:00:50.406 ******
2026-01-01 00:58:34.872807 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:58:34.872824 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:58:34.872840 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:58:34.872854 | orchestrator |
2026-01-01 00:58:34.872870 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-01-01 00:58:34.872885 | orchestrator | Thursday 01 January 2026 00:56:16 +0000 (0:00:00.462) 0:00:50.869 ******
2026-01-01 00:58:34.872912 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:58:34.872922 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:58:34.872930 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:58:34.872939 | orchestrator |
2026-01-01 00:58:34.872948 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-01 00:58:34.872956 | orchestrator | Thursday 01 January 2026 00:56:17 +0000 (0:00:00.732) 0:00:51.601 ******
2026-01-01 00:58:34.872965 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:58:34.872974 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:58:34.872983 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-01-01 00:58:34.872991 | orchestrator |
2026-01-01 00:58:34.873000 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-01-01 00:58:34.873009 | orchestrator | Thursday 01 January 2026 00:56:17 +0000 (0:00:00.406) 0:00:52.007 ******
2026-01-01 00:58:34.873017 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:58:34.873026 | orchestrator |
2026-01-01 00:58:34.873034 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-01-01 00:58:34.873043 | orchestrator | Thursday 01 January 2026 00:56:27 +0000 (0:00:10.126) 0:01:02.134 ******
2026-01-01 00:58:34.873061 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:58:34.873070 | orchestrator |
2026-01-01 00:58:34.873078 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-01 00:58:34.873087 | orchestrator | Thursday 01 January 2026 00:56:27 +0000 (0:00:00.144) 0:01:02.278 ******
2026-01-01 00:58:34.873095 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:58:34.873104 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:58:34.873113 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:58:34.873121 | orchestrator |
2026-01-01 00:58:34.873130 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-01-01 00:58:34.873138 | orchestrator | Thursday 01 January 2026 00:56:28 +0000 (0:00:01.096) 0:01:03.374 ******
2026-01-01 00:58:34.873147 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:58:34.873156 | orchestrator |
2026-01-01 00:58:34.873164 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-01-01 00:58:34.873173 | orchestrator | Thursday 01 January 2026 00:56:37 +0000 (0:00:08.478) 0:01:11.853 ******
2026-01-01 00:58:34.873182 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:58:34.873190 | orchestrator |
2026-01-01 00:58:34.873199 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-01-01 00:58:34.873213 | orchestrator | Thursday 01 January 2026 00:56:39 +0000 (0:00:01.733) 0:01:13.586 ******
2026-01-01 00:58:34.873227 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:58:34.873242 | orchestrator |
2026-01-01 00:58:34.873256 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-01-01 00:58:34.873270 | orchestrator | Thursday 01 January 2026 00:56:41 +0000 (0:00:02.633) 0:01:16.220 ******
2026-01-01 00:58:34.873285 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:58:34.873300 | orchestrator |
2026-01-01 00:58:34.873315 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-01-01 00:58:34.873330 | orchestrator | Thursday 01 January 2026 00:56:41 +0000 (0:00:00.192) 0:01:16.412 ******
2026-01-01 00:58:34.873344 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:58:34.873359 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:58:34.873368 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:58:34.873377 | orchestrator |
2026-01-01 00:58:34.873393 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-01-01 00:58:34.873402 | orchestrator | Thursday 01 January 2026 00:56:42 +0000 (0:00:00.433) 0:01:16.845 ******
2026-01-01 00:58:34.873410 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:58:34.873419 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-01-01 00:58:34.873428 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:58:34.873436 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:58:34.873445 | orchestrator |
2026-01-01 00:58:34.873453 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-01-01 00:58:34.873462 | orchestrator | skipping: no hosts matched
2026-01-01 00:58:34.873471 | orchestrator |
2026-01-01 00:58:34.873479 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-01-01 00:58:34.873488 | orchestrator |
2026-01-01 00:58:34.873496 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-01 00:58:34.873505 | orchestrator | Thursday 01 January 2026 00:56:43 +0000 (0:00:00.620) 0:01:17.466 ******
2026-01-01 00:58:34.873514 | orchestrator | changed: [testbed-node-1]
2026-01-01 00:58:34.873524 | orchestrator |
2026-01-01 00:58:34.873537 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-01 00:58:34.873546 | orchestrator | Thursday 01 January 2026 00:57:07 +0000 (0:00:24.087) 0:01:41.553 ******
2026-01-01 00:58:34.873583 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:58:34.873592 | orchestrator |
2026-01-01 00:58:34.873607 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-01 00:58:34.873622 | orchestrator | Thursday 01 January 2026 00:57:17 +0000 (0:00:10.571) 0:01:52.125 ******
2026-01-01 00:58:34.873637 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:58:34.873655 | orchestrator |
2026-01-01 00:58:34.873664 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-01-01 00:58:34.873672 | orchestrator |
2026-01-01 00:58:34.873680 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-01 00:58:34.873689 | orchestrator | Thursday 01 January 2026 00:57:20 +0000 (0:00:02.615) 0:01:54.740 ******
2026-01-01 00:58:34.873697 | orchestrator | changed: [testbed-node-2]
2026-01-01 00:58:34.873709 | orchestrator |
2026-01-01 00:58:34.873724 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-01 00:58:34.873739 | orchestrator | Thursday 01 January 2026 00:57:45 +0000 (0:00:25.068) 0:02:19.809 ******
2026-01-01 00:58:34.873755 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:58:34.873771 | orchestrator |
2026-01-01 00:58:34.873786 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-01 00:58:34.873801 | orchestrator | Thursday 01 January 2026 00:57:56 +0000 (0:00:11.557) 0:02:31.367 ******
2026-01-01 00:58:34.873817 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:58:34.873831 | orchestrator |
2026-01-01 00:58:34.873846 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-01-01 00:58:34.873859 | orchestrator |
2026-01-01 00:58:34.873882 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-01-01 00:58:34.873896 | orchestrator | Thursday 01 January 2026 00:57:59 +0000 (0:00:02.780) 0:02:34.148 ******
2026-01-01 00:58:34.873911 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:58:34.873926 | orchestrator |
2026-01-01 00:58:34.873941 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-01-01 00:58:34.873955 | orchestrator | Thursday 01 January 2026 00:58:12 +0000 (0:00:12.422) 0:02:46.570 ******
2026-01-01 00:58:34.873969 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:58:34.873982 | orchestrator |
2026-01-01 00:58:34.873997 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-01-01 00:58:34.874012 | orchestrator | Thursday 01 January 2026 00:58:16 +0000 (0:00:04.593) 0:02:51.164 ******
2026-01-01 00:58:34.874107 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:58:34.874124 | orchestrator |
2026-01-01 00:58:34.874134 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-01-01 00:58:34.874143 | orchestrator |
2026-01-01 00:58:34.874157 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-01-01 00:58:34.874171 | orchestrator | Thursday 01 January 2026 00:58:19 +0000 (0:00:02.846) 0:02:54.011 ******
2026-01-01 00:58:34.874184 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:58:34.874197 | orchestrator |
2026-01-01 00:58:34.874209 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-01-01 00:58:34.874222 | orchestrator | Thursday 01 January 2026 00:58:20 +0000 (0:00:00.613) 0:02:54.625 ******
2026-01-01 00:58:34.874234 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:58:34.874249 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:58:34.874264 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:58:34.874278 | orchestrator |
2026-01-01 00:58:34.874293 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-01-01 00:58:34.874302 | orchestrator | Thursday 01 January 2026 00:58:22 +0000 (0:00:02.523) 0:02:57.148 ******
2026-01-01 00:58:34.874311 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:58:34.874319 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:58:34.874328 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:58:34.874336 | orchestrator |
2026-01-01 00:58:34.874345 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-01-01 00:58:34.874353 | orchestrator | Thursday 01 January 2026 00:58:25 +0000 (0:00:02.688) 0:02:59.836 ******
2026-01-01 00:58:34.874362 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:58:34.874370 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:58:34.874379 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:58:34.874387 | orchestrator |
2026-01-01 00:58:34.874396 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-01-01 00:58:34.874418 | orchestrator | Thursday 01 January 2026 00:58:27 +0000 (0:00:02.556) 0:03:02.392 ******
2026-01-01 00:58:34.874427 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:58:34.874435 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:58:34.874444 | orchestrator | changed: [testbed-node-0]
2026-01-01 00:58:34.874452 | orchestrator |
2026-01-01 00:58:34.874461 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-01-01 00:58:34.874469 | orchestrator | Thursday 01 January 2026 00:58:30 +0000 (0:00:02.734) 0:03:05.126 ******
2026-01-01 00:58:34.874477 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:58:34.874493 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:58:34.874502 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:58:34.874510 | orchestrator |
2026-01-01 00:58:34.874519 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-01-01 00:58:34.874527 | orchestrator | Thursday 01 January 2026 00:58:34 +0000 (0:00:03.370) 0:03:08.497 ******
2026-01-01 00:58:34.874536 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:58:34.874545 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:58:34.874581 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:58:34.874590 | orchestrator |
2026-01-01 00:58:34.874599 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 00:58:34.874609 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-01-01 00:58:34.874619 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-01-01 00:58:34.874629 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-01-01 00:58:34.874638 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-01-01 00:58:34.874646 | orchestrator |
2026-01-01 00:58:34.874655 | orchestrator |
2026-01-01 00:58:34.874667 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 00:58:34.874681 | orchestrator | Thursday 01 January 2026 00:58:34 +0000 (0:00:00.252) 0:03:08.749 ******
2026-01-01 00:58:34.874696 | orchestrator | ===============================================================================
2026-01-01 00:58:34.874710 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 49.16s
2026-01-01 00:58:34.874720 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 22.13s
2026-01-01 00:58:34.874730 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.42s
2026-01-01 00:58:34.874744 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.05s
2026-01-01 00:58:34.874760 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.13s
2026-01-01 00:58:34.874772 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.48s
2026-01-01 00:58:34.874790 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.40s
2026-01-01 00:58:34.874799 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.67s
2026-01-01 00:58:34.874807 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.59s
2026-01-01 00:58:34.874817 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.11s
2026-01-01 00:58:34.874832 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.74s
2026-01-01 00:58:34.874848 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.56s
2026-01-01 00:58:34.874864 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.37s
2026-01-01 00:58:34.874880 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.34s
2026-01-01 00:58:34.874908 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.28s
2026-01-01 00:58:34.874926 | orchestrator | Check MariaDB service --------------------------------------------------- 2.87s
2026-01-01 00:58:34.874945 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.85s
2026-01-01 00:58:34.874961 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.73s
2026-01-01 00:58:34.874978 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.69s
2026-01-01 00:58:34.874996 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.66s
2026-01-01 00:58:34.875012 | orchestrator | 2026-01-01 00:58:34 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:58:34.875027 | orchestrator | 2026-01-01 00:58:34 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:37.930271 | orchestrator | 2026-01-01 00:58:37 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:58:37.933920 | orchestrator | 2026-01-01 00:58:37 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:58:37.935540 | orchestrator | 2026-01-01 00:58:37 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:58:37.936260 | orchestrator | 2026-01-01 00:58:37 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:40.980974 | orchestrator | 2026-01-01 00:58:40 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:58:40.982712 | orchestrator | 2026-01-01 00:58:40 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:58:40.985297 | orchestrator | 2026-01-01 00:58:40 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
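The mariadb play above splits hosts by their WSREP sync status ("Divide hosts by their MariaDB service WSREP sync status") and only fails or bootstraps based on that partition. The decision boils down to comparing each node's `wsrep_local_state_comment` value against `Synced`. A minimal sketch of that partitioning logic — hypothetical helper names, not the role's actual code:

```python
def partition_by_wsrep_sync(states: dict) -> tuple:
    """Split hosts into (synced, unsynced) lists based on each node's
    wsrep_local_state_comment value, as reported by
    SHOW STATUS LIKE 'wsrep_local_state_comment'.
    A fully healthy Galera cluster reports 'Synced' on every node."""
    synced = [host for host, state in states.items() if state == "Synced"]
    unsynced = [host for host, state in states.items() if state != "Synced"]
    return synced, unsynced

# Example: node-2 is still acting as an SST donor (assumed values).
states = {
    "testbed-node-0": "Synced",
    "testbed-node-1": "Synced",
    "testbed-node-2": "Donor/Desynced",
}
synced, unsynced = partition_by_wsrep_sync(states)
```

With all three nodes reporting `Synced`, the "Fail when MariaDB services are not synced across the whole cluster" task is skipped, which is what the log shows.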
2026-01-01 00:58:40.985356 | orchestrator | 2026-01-01 00:58:40 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:44.027682 | orchestrator | 2026-01-01 00:58:44 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:58:44.029727 | orchestrator | 2026-01-01 00:58:44 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:58:44.030965 | orchestrator | 2026-01-01 00:58:44 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:58:44.031005 | orchestrator | 2026-01-01 00:58:44 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:47.068103 | orchestrator | 2026-01-01 00:58:47 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:58:47.068853 | orchestrator | 2026-01-01 00:58:47 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:58:47.070074 | orchestrator | 2026-01-01 00:58:47 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:58:47.070595 | orchestrator | 2026-01-01 00:58:47 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:50.109840 | orchestrator | 2026-01-01 00:58:50 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:58:50.109926 | orchestrator | 2026-01-01 00:58:50 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:58:50.110687 | orchestrator | 2026-01-01 00:58:50 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:58:50.110733 | orchestrator | 2026-01-01 00:58:50 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:53.148937 | orchestrator | 2026-01-01 00:58:53 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:58:53.152045 | orchestrator | 2026-01-01 00:58:53 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:58:53.154146 | orchestrator | 2026-01-01 00:58:53 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:58:53.154261 | orchestrator | 2026-01-01 00:58:53 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:56.187913 | orchestrator | 2026-01-01 00:58:56 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:58:56.190629 | orchestrator | 2026-01-01 00:58:56 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:58:56.193393 | orchestrator | 2026-01-01 00:58:56 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:58:56.193444 | orchestrator | 2026-01-01 00:58:56 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:58:59.228630 | orchestrator | 2026-01-01 00:58:59 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:58:59.228709 | orchestrator | 2026-01-01 00:58:59 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:58:59.228718 | orchestrator | 2026-01-01 00:58:59 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:58:59.228957 | orchestrator | 2026-01-01 00:58:59 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:02.273205 | orchestrator | 2026-01-01 00:59:02 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:02.273976 | orchestrator | 2026-01-01 00:59:02 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:02.276239 | orchestrator | 2026-01-01 00:59:02 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:02.276283 | orchestrator | 2026-01-01 00:59:02 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:05.320940 | orchestrator | 2026-01-01 00:59:05 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:05.322806 | orchestrator | 2026-01-01 00:59:05 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:05.324641 | orchestrator | 2026-01-01 00:59:05 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:05.324677 | orchestrator | 2026-01-01 00:59:05 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:08.379383 | orchestrator | 2026-01-01 00:59:08 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:08.380207 | orchestrator | 2026-01-01 00:59:08 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:08.382689 | orchestrator | 2026-01-01 00:59:08 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:08.382776 | orchestrator | 2026-01-01 00:59:08 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:11.435862 | orchestrator | 2026-01-01 00:59:11 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:11.449490 | orchestrator | 2026-01-01 00:59:11 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:11.449567 | orchestrator | 2026-01-01 00:59:11 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:11.449579 | orchestrator | 2026-01-01 00:59:11 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:14.495135 | orchestrator | 2026-01-01 00:59:14 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:14.497340 | orchestrator | 2026-01-01 00:59:14 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:14.498409 | orchestrator | 2026-01-01 00:59:14 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:14.500577 | orchestrator | 2026-01-01 00:59:14 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:17.558242 | orchestrator | 2026-01-01 00:59:17 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:17.560978 | orchestrator | 2026-01-01 00:59:17 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:17.563101 | orchestrator | 2026-01-01 00:59:17 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:17.563233 | orchestrator | 2026-01-01 00:59:17 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:20.616140 | orchestrator | 2026-01-01 00:59:20 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:20.617832 | orchestrator | 2026-01-01 00:59:20 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:20.620124 | orchestrator | 2026-01-01 00:59:20 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:20.620605 | orchestrator | 2026-01-01 00:59:20 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:23.676509 | orchestrator | 2026-01-01 00:59:23 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:23.677821 | orchestrator | 2026-01-01 00:59:23 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:23.680116 | orchestrator | 2026-01-01 00:59:23 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:23.680138 | orchestrator | 2026-01-01 00:59:23 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:26.735657 | orchestrator | 2026-01-01 00:59:26 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:26.737523 | orchestrator | 2026-01-01 00:59:26 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:26.738712 | orchestrator | 2026-01-01 00:59:26 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:26.738761 | orchestrator | 2026-01-01 00:59:26 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:29.786235 | orchestrator | 2026-01-01 00:59:29 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:29.787578 | orchestrator | 2026-01-01 00:59:29 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:29.789008 | orchestrator | 2026-01-01 00:59:29 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:29.789564 | orchestrator | 2026-01-01 00:59:29 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:32.832847 | orchestrator | 2026-01-01 00:59:32 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:32.834453 | orchestrator | 2026-01-01 00:59:32 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:32.835892 | orchestrator | 2026-01-01 00:59:32 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:32.835942 | orchestrator | 2026-01-01 00:59:32 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:35.885942 | orchestrator | 2026-01-01 00:59:35 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:35.888263 | orchestrator | 2026-01-01 00:59:35 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:35.890341 | orchestrator | 2026-01-01 00:59:35 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:35.890771 | orchestrator | 2026-01-01 00:59:35 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:38.927108 | orchestrator | 2026-01-01 00:59:38 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:38.928073 | orchestrator | 2026-01-01 00:59:38 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:38.929445 | orchestrator | 2026-01-01 00:59:38 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:38.929481 | orchestrator | 2026-01-01 00:59:38 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:41.973554 | orchestrator | 2026-01-01 00:59:41 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:41.975849 | orchestrator | 2026-01-01 00:59:41 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state STARTED
2026-01-01 00:59:41.978343 | orchestrator | 2026-01-01 00:59:41 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:41.978467 | orchestrator | 2026-01-01 00:59:41 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:45.029222 | orchestrator | 2026-01-01 00:59:45 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state STARTED
2026-01-01 00:59:45.036693 | orchestrator | 2026-01-01 00:59:45 | INFO  | Task 5f9dab55-6eb0-41a1-a4da-ec7bf145f67f is in state SUCCESS
2026-01-01 00:59:45.038370 | orchestrator |
2026-01-01 00:59:45.038406 | orchestrator |
2026-01-01 00:59:45.038418 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 00:59:45.038430 | orchestrator |
2026-01-01 00:59:45.038441 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 00:59:45.038453 | orchestrator | Thursday 01 January 2026 00:58:39 +0000 (0:00:00.259) 0:00:00.259 ******
2026-01-01 00:59:45.038464 | orchestrator | ok: [testbed-node-0]
2026-01-01 00:59:45.038477 | orchestrator | ok: [testbed-node-1]
2026-01-01 00:59:45.038488 | orchestrator | ok: [testbed-node-2]
2026-01-01 00:59:45.038499 | orchestrator |
2026-01-01 00:59:45.038510 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 00:59:45.038547 | orchestrator | Thursday 01 January 2026 00:58:39 +0000 (0:00:00.307) 0:00:00.566 ******
2026-01-01 00:59:45.038559 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-01-01 00:59:45.038570 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-01-01 00:59:45.038581 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-01-01 00:59:45.038592 | orchestrator |
2026-01-01 00:59:45.038603 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-01-01 00:59:45.038615 | orchestrator |
2026-01-01 00:59:45.038720 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-01 00:59:45.038732 | orchestrator | Thursday 01 January 2026 00:58:40 +0000 (0:00:00.511) 0:00:01.078 ******
2026-01-01 00:59:45.038744 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:59:45.038756 | orchestrator |
2026-01-01 00:59:45.038767 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-01-01 00:59:45.038778 | orchestrator | Thursday 01 January 2026 00:58:40 +0000 (0:00:00.605) 0:00:01.683 ******
2026-01-01 00:59:45.038797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-01 00:59:45.038852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-01 00:59:45.038882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-01 00:59:45.038895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 00:59:45.038909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 00:59:45.038921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-01 00:59:45.038941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 00:59:45.038959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 00:59:45.038970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-01 00:59:45.038985 | orchestrator |
2026-01-01 00:59:45.038998 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-01-01 00:59:45.039011 | orchestrator | Thursday 01 January 2026 00:58:42 +0000 (0:00:01.844) 0:00:03.528 ******
2026-01-01 00:59:45.039025 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:45.039040 | orchestrator |
2026-01-01 00:59:45.039726 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-01-01 00:59:45.039749 | orchestrator | Thursday 01 January 2026 00:58:42 +0000 (0:00:00.150) 0:00:03.678 ******
2026-01-01 00:59:45.039760 | orchestrator | skipping: [testbed-node-0]
2026-01-01 00:59:45.039976 | orchestrator | skipping: [testbed-node-1]
2026-01-01 00:59:45.039988 | orchestrator | skipping: [testbed-node-2]
2026-01-01 00:59:45.040000 | orchestrator |
2026-01-01 00:59:45.040011 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-01-01 00:59:45.040022 | orchestrator | Thursday 01 January 2026 00:58:43 +0000 (0:00:00.528) 0:00:04.206 ******
2026-01-01 00:59:45.040033 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-01 00:59:45.040071 | orchestrator |
2026-01-01 00:59:45.040082 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-01 00:59:45.040093 | orchestrator | Thursday 01 January 2026 00:58:44 +0000 (0:00:00.883) 0:00:05.089 ******
2026-01-01 00:59:45.040104 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 00:59:45.040115 | orchestrator |
2026-01-01 00:59:45.040130 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-01-01 00:59:45.040148 | orchestrator | Thursday 01 January 2026 00:58:44 +0000 (0:00:00.548) 0:00:05.638 ******
2026-01-01 00:59:45.040169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.040206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.040237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.040296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 00:59:45.040311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 00:59:45.040332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 00:59:45.040344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.040356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.040374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.040385 | orchestrator | 2026-01-01 00:59:45.040397 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-01 00:59:45.040408 | orchestrator | Thursday 01 January 2026 00:58:48 +0000 (0:00:03.701) 0:00:09.340 ****** 2026-01-01 00:59:45.040452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-01 00:59:45.040466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.040485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:45.040497 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:45.040510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-01 00:59:45.040566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.040580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:45.040591 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:45.040638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-01 00:59:45.040661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.040675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:45.040688 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:45.040701 | orchestrator | 2026-01-01 00:59:45.040714 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-01 00:59:45.040727 | orchestrator | Thursday 01 January 2026 00:58:49 +0000 (0:00:00.830) 0:00:10.170 ****** 2026-01-01 00:59:45.040747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-01 00:59:45.040760 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.040801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:45.040821 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:45.040833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-01 00:59:45.040845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.040857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:45.040868 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:45.040885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-01 00:59:45.040927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.040953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:45.040965 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:45.040976 | orchestrator | 2026-01-01 00:59:45.040987 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-01 00:59:45.040998 | orchestrator | Thursday 01 January 2026 00:58:49 +0000 (0:00:00.779) 0:00:10.950 ****** 2026-01-01 00:59:45.041010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.041027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.041069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.041090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 00:59:45.041102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 00:59:45.041114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 00:59:45.041125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.041141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.041153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.041170 | orchestrator | 2026-01-01 00:59:45.041182 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-01 00:59:45.041193 | orchestrator | Thursday 01 January 2026 00:58:53 +0000 (0:00:03.526) 0:00:14.477 ****** 2026-01-01 00:59:45.041236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.041250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.041262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.041279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.041319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.041344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.041356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.041367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.041378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.041390 | orchestrator | 2026-01-01 00:59:45.041401 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-01 00:59:45.041412 | orchestrator | Thursday 01 January 2026 00:58:59 +0000 (0:00:05.971) 0:00:20.448 ****** 2026-01-01 00:59:45.041423 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:45.041434 | orchestrator | changed: [testbed-node-1] 2026-01-01 00:59:45.041445 | orchestrator | changed: [testbed-node-2] 2026-01-01 00:59:45.041455 | orchestrator | 2026-01-01 00:59:45.041466 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-01 00:59:45.041477 | orchestrator | Thursday 01 January 2026 00:59:01 +0000 (0:00:01.845) 0:00:22.294 ****** 2026-01-01 00:59:45.041493 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:45.041511 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:45.041594 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:45.041607 | orchestrator | 2026-01-01 00:59:45.041618 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-01 00:59:45.041628 | orchestrator | Thursday 01 
January 2026 00:59:01 +0000 (0:00:00.629) 0:00:22.923 ****** 2026-01-01 00:59:45.041639 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:45.041650 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:45.041661 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:45.041672 | orchestrator | 2026-01-01 00:59:45.041683 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-01 00:59:45.041694 | orchestrator | Thursday 01 January 2026 00:59:02 +0000 (0:00:00.319) 0:00:23.243 ****** 2026-01-01 00:59:45.041705 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:45.041716 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:45.041726 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:45.041737 | orchestrator | 2026-01-01 00:59:45.041748 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-01 00:59:45.041759 | orchestrator | Thursday 01 January 2026 00:59:02 +0000 (0:00:00.516) 0:00:23.760 ****** 2026-01-01 00:59:45.041806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-01 00:59:45.041821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.041832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:45.041844 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:45.041861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-01 00:59:45.041882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.041922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:45.041935 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:45.041947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-01 00:59:45.041959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-01 00:59:45.041970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-01 00:59:45.041989 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:45.042000 | orchestrator | 2026-01-01 00:59:45.042011 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-01 00:59:45.042083 | orchestrator | Thursday 01 January 2026 00:59:03 +0000 (0:00:00.733) 0:00:24.493 ****** 2026-01-01 00:59:45.042095 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:45.042105 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:45.042114 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:45.042124 | orchestrator | 2026-01-01 00:59:45.042134 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-01 00:59:45.042143 | orchestrator | Thursday 01 January 2026 00:59:03 +0000 (0:00:00.330) 0:00:24.824 ****** 2026-01-01 00:59:45.042153 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-01 00:59:45.042169 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-01 00:59:45.042179 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-01 00:59:45.042189 | orchestrator | 2026-01-01 00:59:45.042199 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-01 00:59:45.042208 | orchestrator | Thursday 01 January 2026 00:59:05 +0000 (0:00:01.684) 0:00:26.509 ****** 2026-01-01 00:59:45.042218 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 00:59:45.042228 | orchestrator | 2026-01-01 00:59:45.042238 | orchestrator | TASK [keystone : Copying over 
keystone-paste.ini] ****************************** 2026-01-01 00:59:45.042248 | orchestrator | Thursday 01 January 2026 00:59:06 +0000 (0:00:00.993) 0:00:27.502 ****** 2026-01-01 00:59:45.042257 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:45.042267 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:45.042276 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:45.042286 | orchestrator | 2026-01-01 00:59:45.042296 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-01 00:59:45.042305 | orchestrator | Thursday 01 January 2026 00:59:07 +0000 (0:00:01.027) 0:00:28.529 ****** 2026-01-01 00:59:45.042315 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 00:59:45.042325 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-01 00:59:45.042334 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-01 00:59:45.042344 | orchestrator | 2026-01-01 00:59:45.042354 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-01 00:59:45.042370 | orchestrator | Thursday 01 January 2026 00:59:08 +0000 (0:00:01.154) 0:00:29.683 ****** 2026-01-01 00:59:45.042380 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:45.042391 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:59:45.042401 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:59:45.042410 | orchestrator | 2026-01-01 00:59:45.042420 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-01 00:59:45.042430 | orchestrator | Thursday 01 January 2026 00:59:08 +0000 (0:00:00.340) 0:00:30.023 ****** 2026-01-01 00:59:45.042440 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-01 00:59:45.042449 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-01 00:59:45.042459 | orchestrator | changed: [testbed-node-2] => 
(item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-01 00:59:45.042468 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-01 00:59:45.042478 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-01 00:59:45.042488 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-01 00:59:45.042498 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-01 00:59:45.042515 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-01 00:59:45.042542 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-01 00:59:45.042552 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-01 00:59:45.042562 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-01 00:59:45.042572 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-01 00:59:45.042582 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-01 00:59:45.042591 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-01 00:59:45.042601 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-01 00:59:45.042611 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-01 00:59:45.042621 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-01 
00:59:45.042630 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-01 00:59:45.042640 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-01 00:59:45.042650 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-01 00:59:45.042660 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-01 00:59:45.042669 | orchestrator | 2026-01-01 00:59:45.042679 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-01 00:59:45.042689 | orchestrator | Thursday 01 January 2026 00:59:18 +0000 (0:00:09.558) 0:00:39.582 ****** 2026-01-01 00:59:45.042698 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-01 00:59:45.042708 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-01 00:59:45.042717 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-01 00:59:45.042727 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-01 00:59:45.042741 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-01 00:59:45.042751 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-01 00:59:45.042761 | orchestrator | 2026-01-01 00:59:45.042771 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-01 00:59:45.042780 | orchestrator | Thursday 01 January 2026 00:59:21 +0000 (0:00:03.120) 0:00:42.703 ****** 2026-01-01 00:59:45.042798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.042819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.042831 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-01 00:59:45.042843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 00:59:45.042857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 00:59:45.042868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-01 00:59:45.042890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.042901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.042912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-01 00:59:45.042922 | orchestrator | 2026-01-01 00:59:45.042932 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-01 00:59:45.042941 | orchestrator | Thursday 01 January 2026 00:59:24 +0000 (0:00:02.540) 0:00:45.244 ****** 2026-01-01 00:59:45.042951 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:45.042961 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:45.042971 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:45.042981 | orchestrator | 2026-01-01 00:59:45.042991 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-01 00:59:45.043001 | orchestrator | Thursday 01 January 2026 00:59:24 +0000 (0:00:00.292) 0:00:45.537 ****** 2026-01-01 00:59:45.043013 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:45.043029 | orchestrator | 2026-01-01 00:59:45.043045 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-01-01 00:59:45.043060 | orchestrator | Thursday 01 January 2026 00:59:27 +0000 (0:00:02.522) 0:00:48.059 
****** 2026-01-01 00:59:45.043075 | orchestrator | changed: [testbed-node-0] 2026-01-01 00:59:45.043089 | orchestrator | 2026-01-01 00:59:45.043105 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-01 00:59:45.043120 | orchestrator | Thursday 01 January 2026 00:59:29 +0000 (0:00:02.741) 0:00:50.801 ****** 2026-01-01 00:59:45.043136 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:59:45.043153 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:45.043169 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:59:45.043186 | orchestrator | 2026-01-01 00:59:45.043196 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-01 00:59:45.043206 | orchestrator | Thursday 01 January 2026 00:59:30 +0000 (0:00:01.152) 0:00:51.953 ****** 2026-01-01 00:59:45.043215 | orchestrator | ok: [testbed-node-0] 2026-01-01 00:59:45.043225 | orchestrator | ok: [testbed-node-1] 2026-01-01 00:59:45.043239 | orchestrator | ok: [testbed-node-2] 2026-01-01 00:59:45.043249 | orchestrator | 2026-01-01 00:59:45.043259 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-01 00:59:45.043268 | orchestrator | Thursday 01 January 2026 00:59:31 +0000 (0:00:00.332) 0:00:52.286 ****** 2026-01-01 00:59:45.043285 | orchestrator | skipping: [testbed-node-0] 2026-01-01 00:59:45.043294 | orchestrator | skipping: [testbed-node-1] 2026-01-01 00:59:45.043304 | orchestrator | skipping: [testbed-node-2] 2026-01-01 00:59:45.043313 | orchestrator | 2026-01-01 00:59:45.043323 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-01 00:59:45.043333 | orchestrator | Thursday 01 January 2026 00:59:31 +0000 (0:00:00.362) 0:00:52.649 ****** 2026-01-01 00:59:45.043547 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "Container exited with non-zero return code 1", "rc": 1, "stderr": "+ sudo -E kolla_set_configs\nINFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying service configuration files\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh\nINFO:__main__:Setting permission for /usr/bin/keystone-startup.sh\nINFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf\nINFO:__main__:Setting permission for /etc/keystone/keystone.conf\nINFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla\nINFO:__main__:Setting permission for /etc/keystone/fernet-keys\n++ cat /run_command\n+ CMD=/usr/bin/keystone-startup.sh\n+ ARGS=\n+ sudo kolla_copy_cacerts\nrehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL\n+ sudo kolla_install_projects\n+ [[ ! -n '' ]]\n+ . kolla_extend_start\n++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone\n++ [[ ! -d /var/log/kolla/keystone ]]\n++ mkdir -p /var/log/kolla/keystone\n+++ stat -c %U:%G /var/log/kolla/keystone\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]\n++ chown keystone:kolla /var/log/kolla/keystone\n++ '[' '!' 
-f /var/log/kolla/keystone/keystone.log ']'\n++ touch /var/log/kolla/keystone/keystone.log\n+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log\n++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]\n++ chown keystone:keystone /var/log/kolla/keystone/keystone.log\n+++ stat -c %a /var/log/kolla/keystone\n++ [[ 2755 != \\7\\5\\5 ]]\n++ chmod 755 /var/log/kolla/keystone\n++ EXTRA_KEYSTONE_MANAGE_ARGS=\n++ [[ -n '' ]]\n++ [[ -n '' ]]\n++ [[ -n 0 ]]\n++ sudo -H -u keystone keystone-manage db_sync\n2026-01-01 00:59:42.808 1079 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342\n2026-01-01 00:59:42.815 1079 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-01 00:59:42.815 1079 ERROR keystone Traceback (most recent call last):\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-01 00:59:42.815 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection\n2026-01-01 00:59:42.815 1079 ERROR keystone return self.pool.connect()\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-01 00:59:42.815 1079 
ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-01 00:59:42.815 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-01 00:59:42.815 1079 ERROR keystone rec = pool._do_get()\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-01 00:59:42.815 1079 ERROR keystone with util.safe_reraise():\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-01 00:59:42.815 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-01 00:59:42.815 1079 ERROR keystone return self._create_connection()\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-01 00:59:42.815 1079 ERROR keystone return _ConnectionRecord(self)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-01 00:59:42.815 1079 ERROR keystone 
self.__connect()\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-01 00:59:42.815 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-01 00:59:42.815 1079 ERROR keystone self(*args, **kw)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-01 00:59:42.815 1079 ERROR keystone fn(*args, **kw)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-01 00:59:42.815 1079 ERROR keystone return once_fn(*arg, **kw)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-01 00:59:42.815 1079 ERROR keystone dialect.initialize(c)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-01 00:59:42.815 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-01 00:59:42.815 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level\n2026-01-01 00:59:42.815 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-01 00:59:42.815 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-01 00:59:42.815 1079 ERROR keystone result = self._query(query)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-01 00:59:42.815 1079 ERROR keystone conn.query(q)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-01 00:59:42.815 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-01 00:59:42.815 1079 ERROR keystone result.read()\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-01 00:59:42.815 1079 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-01 00:59:42.815 1079 ERROR keystone packet.raise_for_error()\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-01 00:59:42.815 1079 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-01 00:59:42.815 1079 ERROR keystone raise errorclass(errno, errval)\n2026-01-01 00:59:42.815 1079 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-01 00:59:42.815 1079 ERROR keystone \n2026-01-01 00:59:42.815 1079 ERROR keystone The above exception was the direct cause of the following exception:\n2026-01-01 00:59:42.815 1079 ERROR keystone \n2026-01-01 00:59:42.815 1079 ERROR keystone Traceback (most recent call last):\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in \n2026-01-01 00:59:42.815 1079 ERROR keystone sys.exit(main())\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main\n2026-01-01 00:59:42.815 1079 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1733, in main\n2026-01-01 00:59:42.815 1079 ERROR keystone CONF.command.cmd_class.main()\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 493, in main\n2026-01-01 00:59:42.815 1079 ERROR keystone 
upgrades.offline_sync_database_to_version(CONF.command.version)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 328, in offline_sync_database_to_version\n2026-01-01 00:59:42.815 1079 ERROR keystone _db_sync(engine=engine)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 217, in _db_sync\n2026-01-01 00:59:42.815 1079 ERROR keystone with sql.session_for_write() as session:\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-01 00:59:42.815 1079 ERROR keystone return next(self.gen)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1042, in _transaction_scope\n2026-01-01 00:59:42.815 1079 ERROR keystone with current._produce_block(\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__\n2026-01-01 00:59:42.815 1079 ERROR keystone return next(self.gen)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 641, in _session\n2026-01-01 00:59:42.815 1079 ERROR keystone self.session = self.factory._create_session(\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 404, in _create_session\n2026-01-01 00:59:42.815 1079 ERROR keystone self._start()\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 493, in 
_start\n2026-01-01 00:59:42.815 1079 ERROR keystone self._setup_for_connection(\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 519, in _setup_for_connection\n2026-01-01 00:59:42.815 1079 ERROR keystone engine = engines.create_engine(\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator\n2026-01-01 00:59:42.815 1079 ERROR keystone return wrapped(*args, **kwargs)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 218, in create_engine\n2026-01-01 00:59:42.815 1079 ERROR keystone test_conn = _test_connection(engine, max_retries, retry_interval)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 411, in _test_connection\n2026-01-01 00:59:42.815 1079 ERROR keystone return engine.connect()\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3278, in connect\n2026-01-01 00:59:42.815 1079 ERROR keystone return self._connection_cls(self)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__\n2026-01-01 00:59:42.815 1079 ERROR keystone Connection._handle_dbapi_exception_noconnection(\n2026-01-01 00:59:42.815 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2439, in _handle_dbapi_exception_noconnection\n2026-01-01 00:59:42.815 1079 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__\n2026-01-01 00:59:42.815 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection\n2026-01-01 00:59:42.815 1079 ERROR keystone return self.pool.connect()\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect\n2026-01-01 00:59:42.815 1079 ERROR keystone return _ConnectionFairy._checkout(self)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout\n2026-01-01 00:59:42.815 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout\n2026-01-01 00:59:42.815 1079 ERROR keystone rec = pool._do_get()\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get\n2026-01-01 00:59:42.815 1079 ERROR keystone with util.safe_reraise():\n2026-01-01 00:59:42.815 1079 ERROR 
keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__\n2026-01-01 00:59:42.815 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get\n2026-01-01 00:59:42.815 1079 ERROR keystone return self._create_connection()\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection\n2026-01-01 00:59:42.815 1079 ERROR keystone return _ConnectionRecord(self)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__\n2026-01-01 00:59:42.815 1079 ERROR keystone self.__connect()\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect\n2026-01-01 00:59:42.815 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run\n2026-01-01 00:59:42.815 1079 ERROR keystone self(*args, **kw)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__\n2026-01-01 00:59:42.815 1079 ERROR keystone fn(*args, **kw)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go\n2026-01-01 00:59:42.815 1079 
ERROR keystone return once_fn(*arg, **kw)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect\n2026-01-01 00:59:42.815 1079 ERROR keystone dialect.initialize(c)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize\n2026-01-01 00:59:42.815 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize\n2026-01-01 00:59:42.815 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level\n2026-01-01 00:59:42.815 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level\n2026-01-01 00:59:42.815 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute\n2026-01-01 00:59:42.815 1079 ERROR keystone result = self._query(query)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query\n2026-01-01 
00:59:42.815 1079 ERROR keystone conn.query(q)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query\n2026-01-01 00:59:42.815 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result\n2026-01-01 00:59:42.815 1079 ERROR keystone result.read()\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read\n2026-01-01 00:59:42.815 1079 ERROR keystone first_packet = self.connection._read_packet()\n2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet\n2026-01-01 00:59:42.815 1079 ERROR keystone packet.raise_for_error()\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error\n2026-01-01 00:59:42.815 1079 ERROR keystone err.raise_mysql_exception(self._data)\n2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception\n2026-01-01 00:59:42.815 1079 ERROR keystone raise errorclass(errno, errval)\n2026-01-01 00:59:42.815 1079 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")\n2026-01-01 00:59:42.815 1079 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)\n2026-01-01 00:59:42.815 1079 ERROR keystone \n", "stderr_lines": ["+ sudo -E kolla_set_configs", 
"INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying service configuration files", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone-startup.sh to /usr/bin/keystone-startup.sh", "INFO:__main__:Setting permission for /usr/bin/keystone-startup.sh", "INFO:__main__:Copying /var/lib/kolla/config_files/keystone.conf to /etc/keystone/keystone.conf", "INFO:__main__:Setting permission for /etc/keystone/keystone.conf", "INFO:__main__:Copying /var/lib/kolla/config_files/wsgi-keystone.conf to /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Setting permission for /etc/apache2/conf-enabled/wsgi-keystone.conf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla", "INFO:__main__:Setting permission for /etc/keystone/fernet-keys", "++ cat /run_command", "+ CMD=/usr/bin/keystone-startup.sh", "+ ARGS=", "+ sudo kolla_copy_cacerts", "rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL", "+ sudo kolla_install_projects", "+ [[ ! -n '' ]]", "+ . kolla_extend_start", "++ KEYSTONE_LOG_DIR=/var/log/kolla/keystone", "++ [[ ! -d /var/log/kolla/keystone ]]", "++ mkdir -p /var/log/kolla/keystone", "+++ stat -c %U:%G /var/log/kolla/keystone", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\o\\l\\l\\a ]]", "++ chown keystone:kolla /var/log/kolla/keystone", "++ '[' '!' 
-f /var/log/kolla/keystone/keystone.log ']'", "++ touch /var/log/kolla/keystone/keystone.log", "+++ stat -c %U:%G /var/log/kolla/keystone/keystone.log", "++ [[ root:kolla != \\k\\e\\y\\s\\t\\o\\n\\e\\:\\k\\e\\y\\s\\t\\o\\n\\e ]]", "++ chown keystone:keystone /var/log/kolla/keystone/keystone.log", "+++ stat -c %a /var/log/kolla/keystone", "++ [[ 2755 != \\7\\5\\5 ]]", "++ chmod 755 /var/log/kolla/keystone", "++ EXTRA_KEYSTONE_MANAGE_ARGS=", "++ [[ -n '' ]]", "++ [[ -n '' ]]", "++ [[ -n 0 ]]", "++ sudo -H -u keystone keystone-manage db_sync", "2026-01-01 00:59:42.808 1079 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py:342", "2026-01-01 00:59:42.815 1079 CRITICAL keystone [-] Unhandled error: sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "(Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-01 00:59:42.815 1079 ERROR keystone Traceback (most recent call last):", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-01 00:59:42.815 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection", "2026-01-01 00:59:42.815 1079 ERROR keystone return self.pool.connect()", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 
449, in connect", "2026-01-01 00:59:42.815 1079 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-01 00:59:42.815 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-01 00:59:42.815 1079 ERROR keystone rec = pool._do_get()", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-01 00:59:42.815 1079 ERROR keystone with util.safe_reraise():", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-01 00:59:42.815 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-01 00:59:42.815 1079 ERROR keystone return self._create_connection()", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-01 00:59:42.815 1079 ERROR keystone return _ConnectionRecord(self)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-01 00:59:42.815 1079 ERROR keystone self.__connect()", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-01 00:59:42.815 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-01 00:59:42.815 1079 ERROR keystone self(*args, **kw)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-01 00:59:42.815 1079 ERROR keystone fn(*args, **kw)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go", "2026-01-01 00:59:42.815 1079 ERROR keystone return once_fn(*arg, **kw)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect", "2026-01-01 00:59:42.815 1079 ERROR keystone dialect.initialize(c)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize", "2026-01-01 00:59:42.815 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize", "2026-01-01 00:59:42.815 1079 ERROR keystone self.default_isolation_level = 
self.get_default_isolation_level(", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level", "2026-01-01 00:59:42.815 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level", "2026-01-01 00:59:42.815 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-01 00:59:42.815 1079 ERROR keystone result = self._query(query)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-01 00:59:42.815 1079 ERROR keystone conn.query(q)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-01 00:59:42.815 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-01 00:59:42.815 1079 ERROR keystone result.read()", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-01 00:59:42.815 1079 ERROR 
keystone first_packet = self.connection._read_packet()", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-01 00:59:42.815 1079 ERROR keystone packet.raise_for_error()", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-01 00:59:42.815 1079 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-01 00:59:42.815 1079 ERROR keystone raise errorclass(errno, errval)", "2026-01-01 00:59:42.815 1079 ERROR keystone pymysql.err.OperationalError: (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-01 00:59:42.815 1079 ERROR keystone ", "2026-01-01 00:59:42.815 1079 ERROR keystone The above exception was the direct cause of the following exception:", "2026-01-01 00:59:42.815 1079 ERROR keystone ", "2026-01-01 00:59:42.815 1079 ERROR keystone Traceback (most recent call last):", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/bin/keystone-manage\", line 7, in ", "2026-01-01 00:59:42.815 1079 ERROR keystone sys.exit(main())", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/manage.py\", line 36, in main", "2026-01-01 00:59:42.815 1079 ERROR keystone cli.main(argv=sys.argv, developer_config_file=developer_config)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 1733, in main", "2026-01-01 00:59:42.815 1079 ERROR keystone CONF.command.cmd_class.main()", "2026-01-01 00:59:42.815 
1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/cmd/cli.py\", line 493, in main", "2026-01-01 00:59:42.815 1079 ERROR keystone upgrades.offline_sync_database_to_version(CONF.command.version)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 328, in offline_sync_database_to_version", "2026-01-01 00:59:42.815 1079 ERROR keystone _db_sync(engine=engine)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/keystone/common/sql/upgrades.py\", line 217, in _db_sync", "2026-01-01 00:59:42.815 1079 ERROR keystone with sql.session_for_write() as session:", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-01 00:59:42.815 1079 ERROR keystone return next(self.gen)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 1042, in _transaction_scope", "2026-01-01 00:59:42.815 1079 ERROR keystone with current._produce_block(", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/usr/lib/python3.12/contextlib.py\", line 137, in __enter__", "2026-01-01 00:59:42.815 1079 ERROR keystone return next(self.gen)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 641, in _session", "2026-01-01 00:59:42.815 1079 ERROR keystone self.session = self.factory._create_session(", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 404, in _create_session", "2026-01-01 00:59:42.815 
1079 ERROR keystone self._start()", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 493, in _start", "2026-01-01 00:59:42.815 1079 ERROR keystone self._setup_for_connection(", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/enginefacade.py\", line 519, in _setup_for_connection", "2026-01-01 00:59:42.815 1079 ERROR keystone engine = engines.create_engine(", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/debtcollector/renames.py\", line 41, in decorator", "2026-01-01 00:59:42.815 1079 ERROR keystone return wrapped(*args, **kwargs)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 218, in create_engine", "2026-01-01 00:59:42.815 1079 ERROR keystone test_conn = _test_connection(engine, max_retries, retry_interval)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/oslo_db/sqlalchemy/engines.py\", line 411, in _test_connection", "2026-01-01 00:59:42.815 1079 ERROR keystone return engine.connect()", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3278, in connect", "2026-01-01 00:59:42.815 1079 ERROR keystone return self._connection_cls(self)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 148, in __init__", "2026-01-01 00:59:42.815 1079 ERROR keystone Connection._handle_dbapi_exception_noconnection(", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 2439, in _handle_dbapi_exception_noconnection", "2026-01-01 00:59:42.815 1079 ERROR keystone raise newraise.with_traceback(exc_info[2]) from e", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 146, in __init__", "2026-01-01 00:59:42.815 1079 ERROR keystone self._dbapi_connection = engine.raw_connection()", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py\", line 3302, in raw_connection", "2026-01-01 00:59:42.815 1079 ERROR keystone return self.pool.connect()", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 449, in connect", "2026-01-01 00:59:42.815 1079 ERROR keystone return _ConnectionFairy._checkout(self)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 1263, in _checkout", "2026-01-01 00:59:42.815 1079 ERROR keystone fairy = _ConnectionRecord.checkout(pool)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 712, in checkout", "2026-01-01 00:59:42.815 1079 ERROR keystone rec = pool._do_get()", "2026-01-01 00:59:42.815 1079 ERROR 
keystone ^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 179, in _do_get", "2026-01-01 00:59:42.815 1079 ERROR keystone with util.safe_reraise():", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 146, in __exit__", "2026-01-01 00:59:42.815 1079 ERROR keystone raise exc_value.with_traceback(exc_tb)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/impl.py\", line 177, in _do_get", "2026-01-01 00:59:42.815 1079 ERROR keystone return self._create_connection()", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 390, in _create_connection", "2026-01-01 00:59:42.815 1079 ERROR keystone return _ConnectionRecord(self)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 674, in __init__", "2026-01-01 00:59:42.815 1079 ERROR keystone self.__connect()", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/pool/base.py\", line 914, in __connect", "2026-01-01 00:59:42.815 1079 ERROR keystone )._exec_w_sync_on_first_run(self.dbapi_connection, self)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 483, in _exec_w_sync_on_first_run", "2026-01-01 00:59:42.815 1079 ERROR keystone self(*args, **kw)", "2026-01-01 00:59:42.815 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/event/attr.py\", line 497, in __call__", "2026-01-01 00:59:42.815 1079 ERROR keystone fn(*args, **kw)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py\", line 1912, in go", "2026-01-01 00:59:42.815 1079 ERROR keystone return once_fn(*arg, **kw)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/create.py\", line 749, in first_connect", "2026-01-01 00:59:42.815 1079 ERROR keystone dialect.initialize(c)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2835, in initialize", "2026-01-01 00:59:42.815 1079 ERROR keystone default.DefaultDialect.initialize(self, connection)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 532, in initialize", "2026-01-01 00:59:42.815 1079 ERROR keystone self.default_isolation_level = self.get_default_isolation_level(", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py\", line 583, in get_default_isolation_level", "2026-01-01 00:59:42.815 1079 ERROR keystone return self.get_isolation_level(dbapi_conn)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/sqlalchemy/dialects/mysql/base.py\", line 2540, in get_isolation_level", "2026-01-01 00:59:42.815 1079 ERROR keystone cursor.execute(\"SELECT @@transaction_isolation\")", "2026-01-01 00:59:42.815 1079 ERROR keystone File 
\"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 153, in execute", "2026-01-01 00:59:42.815 1079 ERROR keystone result = self._query(query)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/cursors.py\", line 322, in _query", "2026-01-01 00:59:42.815 1079 ERROR keystone conn.query(q)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 563, in query", "2026-01-01 00:59:42.815 1079 ERROR keystone self._affected_rows = self._read_query_result(unbuffered=unbuffered)", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 825, in _read_query_result", "2026-01-01 00:59:42.815 1079 ERROR keystone result.read()", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 1199, in read", "2026-01-01 00:59:42.815 1079 ERROR keystone first_packet = self.connection._read_packet()", "2026-01-01 00:59:42.815 1079 ERROR keystone ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/connections.py\", line 775, in _read_packet", "2026-01-01 00:59:42.815 1079 ERROR keystone packet.raise_for_error()", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/protocol.py\", line 219, in raise_for_error", "2026-01-01 00:59:42.815 1079 ERROR keystone err.raise_mysql_exception(self._data)", "2026-01-01 00:59:42.815 1079 ERROR keystone File \"/var/lib/kolla/venv/lib/python3.12/site-packages/pymysql/err.py\", line 150, in raise_mysql_exception", "2026-01-01 00:59:42.815 1079 
ERROR keystone raise errorclass(errno, errval)", "2026-01-01 00:59:42.815 1079 ERROR keystone sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1193, \"Unknown system variable 'transaction_isolation'\")", "2026-01-01 00:59:42.815 1079 ERROR keystone (Background on this error at: https://sqlalche.me/e/20/e3q8)", "2026-01-01 00:59:42.815 1079 ERROR keystone "], "stdout": "Updating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d...\ndone.\n", "stdout_lines": ["Updating certificates in /etc/ssl/certs...", "1 added, 0 removed; done.", "Running hooks in /etc/ca-certificates/update.d...", "done."]} 2026-01-01 00:59:45.043633 | orchestrator | 2026-01-01 00:59:45.043643 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:59:45.043654 | orchestrator | testbed-node-0 : ok=21  changed=11  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-01-01 00:59:45.043670 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-01 00:59:45.043680 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-01 00:59:45.043690 | orchestrator | 2026-01-01 00:59:45.043700 | orchestrator | 2026-01-01 00:59:45.043710 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:59:45.043720 | orchestrator | Thursday 01 January 2026 00:59:43 +0000 (0:00:12.253) 0:01:04.903 ****** 2026-01-01 00:59:45.043730 | orchestrator | =============================================================================== 2026-01-01 00:59:45.043739 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.25s 2026-01-01 00:59:45.043749 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.56s 2026-01-01 00:59:45.043758 | orchestrator | 
keystone : Copying over keystone.conf ----------------------------------- 5.97s 2026-01-01 00:59:45.043768 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.70s 2026-01-01 00:59:45.043777 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.53s 2026-01-01 00:59:45.043787 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.12s 2026-01-01 00:59:45.043796 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.74s 2026-01-01 00:59:45.043806 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.54s 2026-01-01 00:59:45.043815 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.52s 2026-01-01 00:59:45.043825 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.85s 2026-01-01 00:59:45.043834 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.84s 2026-01-01 00:59:45.043844 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.68s 2026-01-01 00:59:45.043853 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.15s 2026-01-01 00:59:45.043863 | orchestrator | keystone : Checking for any running keystone_fernet containers ---------- 1.15s 2026-01-01 00:59:45.043872 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 1.03s 2026-01-01 00:59:45.043882 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 0.99s 2026-01-01 00:59:45.043891 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.88s 2026-01-01 00:59:45.043901 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.83s 2026-01-01 00:59:45.043910 | orchestrator | 
service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.78s 2026-01-01 00:59:45.043920 | orchestrator | keystone : Copying over existing policy file ---------------------------- 0.73s 2026-01-01 00:59:45.043935 | orchestrator | 2026-01-01 00:59:45 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED 2026-01-01 00:59:45.043945 | orchestrator | 2026-01-01 00:59:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 00:59:48.086514 | orchestrator | 2026-01-01 00:59:48 | INFO  | Task f08008c8-a53c-46c8-8bf0-d1e2d29e7e9a is in state STARTED 2026-01-01 00:59:48.088151 | orchestrator | 2026-01-01 00:59:48 | INFO  | Task f05f1b27-c9d2-4fe7-bfba-248a0c5339df is in state STARTED 2026-01-01 00:59:48.092406 | orchestrator | 2026-01-01 00:59:48 | INFO  | Task e96b7077-4bd4-4976-914b-d3b887009974 is in state STARTED 2026-01-01 00:59:48.098304 | orchestrator | 2026-01-01 00:59:48 | INFO  | Task 87c69f1f-dd52-459f-8f14-470db97dafd6 is in state SUCCESS 2026-01-01 00:59:48.099868 | orchestrator | 2026-01-01 00:59:48.099899 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-01 00:59:48.099908 | orchestrator | 2.16.14 2026-01-01 00:59:48.099917 | orchestrator | 2026-01-01 00:59:48.099924 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-01 00:59:48.099932 | orchestrator | 2026-01-01 00:59:48.099953 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-01 00:59:48.099960 | orchestrator | Thursday 01 January 2026 00:57:30 +0000 (0:00:00.695) 0:00:00.695 ****** 2026-01-01 00:59:48.099967 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:59:48.099974 | orchestrator | 2026-01-01 00:59:48.099980 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-01 
00:59:48.099987 | orchestrator | Thursday 01 January 2026 00:57:31 +0000 (0:00:00.696) 0:00:01.392 ****** 2026-01-01 00:59:48.099993 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.099999 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.100005 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.100011 | orchestrator | 2026-01-01 00:59:48.100018 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-01 00:59:48.100024 | orchestrator | Thursday 01 January 2026 00:57:32 +0000 (0:00:00.711) 0:00:02.104 ****** 2026-01-01 00:59:48.100031 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.100037 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.100448 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.100460 | orchestrator | 2026-01-01 00:59:48.100467 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-01 00:59:48.100473 | orchestrator | Thursday 01 January 2026 00:57:32 +0000 (0:00:00.321) 0:00:02.425 ****** 2026-01-01 00:59:48.100480 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.100486 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.100492 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.100499 | orchestrator | 2026-01-01 00:59:48.100505 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-01 00:59:48.100512 | orchestrator | Thursday 01 January 2026 00:57:33 +0000 (0:00:00.814) 0:00:03.240 ****** 2026-01-01 00:59:48.100541 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.100548 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.100554 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.100560 | orchestrator | 2026-01-01 00:59:48.100566 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-01 00:59:48.100573 | orchestrator | Thursday 01 January 2026 00:57:33 +0000 
(0:00:00.324) 0:00:03.565 ****** 2026-01-01 00:59:48.100579 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.100602 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.100608 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.100614 | orchestrator | 2026-01-01 00:59:48.100621 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-01 00:59:48.100627 | orchestrator | Thursday 01 January 2026 00:57:33 +0000 (0:00:00.284) 0:00:03.849 ****** 2026-01-01 00:59:48.100634 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.100640 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.100669 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.100676 | orchestrator | 2026-01-01 00:59:48.100684 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-01 00:59:48.100690 | orchestrator | Thursday 01 January 2026 00:57:34 +0000 (0:00:00.349) 0:00:04.199 ****** 2026-01-01 00:59:48.100696 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.100704 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.100710 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.100716 | orchestrator | 2026-01-01 00:59:48.100722 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-01 00:59:48.100728 | orchestrator | Thursday 01 January 2026 00:57:34 +0000 (0:00:00.415) 0:00:04.615 ****** 2026-01-01 00:59:48.100734 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.100740 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.100746 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.100753 | orchestrator | 2026-01-01 00:59:48.100759 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-01 00:59:48.100765 | orchestrator | Thursday 01 January 2026 00:57:35 +0000 (0:00:00.272) 0:00:04.888 ****** 2026-01-01 
00:59:48.100771 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 00:59:48.100777 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 00:59:48.100784 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 00:59:48.100790 | orchestrator | 2026-01-01 00:59:48.100795 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-01 00:59:48.100801 | orchestrator | Thursday 01 January 2026 00:57:35 +0000 (0:00:00.614) 0:00:05.503 ****** 2026-01-01 00:59:48.100811 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.100817 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.100822 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.100828 | orchestrator | 2026-01-01 00:59:48.100835 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-01 00:59:48.100841 | orchestrator | Thursday 01 January 2026 00:57:36 +0000 (0:00:00.392) 0:00:05.895 ****** 2026-01-01 00:59:48.100847 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 00:59:48.100854 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 00:59:48.100860 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 00:59:48.100866 | orchestrator | 2026-01-01 00:59:48.100872 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-01 00:59:48.100878 | orchestrator | Thursday 01 January 2026 00:57:38 +0000 (0:00:02.078) 0:00:07.974 ****** 2026-01-01 00:59:48.100885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-01 00:59:48.100891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-01 
00:59:48.100897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-01 00:59:48.100904 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.100910 | orchestrator | 2026-01-01 00:59:48.100953 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-01 00:59:48.100962 | orchestrator | Thursday 01 January 2026 00:57:38 +0000 (0:00:00.593) 0:00:08.568 ****** 2026-01-01 00:59:48.100979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.100988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.100994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.101008 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101014 | orchestrator | 2026-01-01 00:59:48.101020 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-01 00:59:48.101026 | orchestrator | Thursday 01 January 2026 00:57:39 +0000 (0:00:00.740) 0:00:09.308 ****** 2026-01-01 00:59:48.101035 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.101043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.101050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.101057 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101063 | orchestrator | 2026-01-01 00:59:48.101070 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-01 00:59:48.101076 | orchestrator | Thursday 01 January 2026 00:57:39 +0000 (0:00:00.394) 0:00:09.703 ****** 2026-01-01 00:59:48.101085 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '503ffd95f3aa', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-01 00:57:36.646917', 'end': '2026-01-01 00:57:36.675443', 'delta': '0:00:00.028526', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': ['503ffd95f3aa'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-01 00:59:48.101096 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '19049fab1981', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-01 00:57:37.248091', 'end': '2026-01-01 00:57:37.282186', 'delta': '0:00:00.034095', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['19049fab1981'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-01 00:59:48.101129 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '29c4232e80aa', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-01 00:57:37.779270', 'end': '2026-01-01 00:57:37.821849', 'delta': '0:00:00.042579', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['29c4232e80aa'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-01 00:59:48.101142 | orchestrator | 2026-01-01 00:59:48.101150 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-01 00:59:48.101157 | orchestrator | Thursday 01 January 2026 00:57:40 +0000 
(0:00:00.205) 0:00:09.908 ****** 2026-01-01 00:59:48.101163 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.101169 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.101175 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.101181 | orchestrator | 2026-01-01 00:59:48.101188 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-01 00:59:48.101194 | orchestrator | Thursday 01 January 2026 00:57:40 +0000 (0:00:00.498) 0:00:10.407 ****** 2026-01-01 00:59:48.101201 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-01 00:59:48.101207 | orchestrator | 2026-01-01 00:59:48.101214 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-01 00:59:48.101220 | orchestrator | Thursday 01 January 2026 00:57:42 +0000 (0:00:02.168) 0:00:12.575 ****** 2026-01-01 00:59:48.101227 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101233 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.101240 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.101247 | orchestrator | 2026-01-01 00:59:48.101254 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-01 00:59:48.101261 | orchestrator | Thursday 01 January 2026 00:57:43 +0000 (0:00:00.328) 0:00:12.904 ****** 2026-01-01 00:59:48.101268 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101275 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.101282 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.101288 | orchestrator | 2026-01-01 00:59:48.101294 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-01 00:59:48.101301 | orchestrator | Thursday 01 January 2026 00:57:43 +0000 (0:00:00.415) 0:00:13.320 ****** 2026-01-01 00:59:48.101375 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101383 | 
orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.101390 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.101396 | orchestrator | 2026-01-01 00:59:48.101403 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-01 00:59:48.101409 | orchestrator | Thursday 01 January 2026 00:57:44 +0000 (0:00:00.560) 0:00:13.881 ****** 2026-01-01 00:59:48.101416 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.101424 | orchestrator | 2026-01-01 00:59:48.101430 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-01 00:59:48.101436 | orchestrator | Thursday 01 January 2026 00:57:44 +0000 (0:00:00.135) 0:00:14.016 ****** 2026-01-01 00:59:48.101443 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101449 | orchestrator | 2026-01-01 00:59:48.101456 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-01 00:59:48.101462 | orchestrator | Thursday 01 January 2026 00:57:44 +0000 (0:00:00.257) 0:00:14.273 ****** 2026-01-01 00:59:48.101468 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101475 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.101481 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.101488 | orchestrator | 2026-01-01 00:59:48.101555 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-01 00:59:48.101566 | orchestrator | Thursday 01 January 2026 00:57:44 +0000 (0:00:00.309) 0:00:14.583 ****** 2026-01-01 00:59:48.101572 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101579 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.101585 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.101600 | orchestrator | 2026-01-01 00:59:48.101606 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-01 
00:59:48.101612 | orchestrator | Thursday 01 January 2026 00:57:45 +0000 (0:00:00.341) 0:00:14.924 ****** 2026-01-01 00:59:48.101618 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101624 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.101630 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.101637 | orchestrator | 2026-01-01 00:59:48.101643 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-01 00:59:48.101649 | orchestrator | Thursday 01 January 2026 00:57:45 +0000 (0:00:00.574) 0:00:15.499 ****** 2026-01-01 00:59:48.101655 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101661 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.101667 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.101673 | orchestrator | 2026-01-01 00:59:48.101679 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-01 00:59:48.101689 | orchestrator | Thursday 01 January 2026 00:57:45 +0000 (0:00:00.358) 0:00:15.858 ****** 2026-01-01 00:59:48.101694 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101700 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.101706 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.101712 | orchestrator | 2026-01-01 00:59:48.101719 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-01 00:59:48.101725 | orchestrator | Thursday 01 January 2026 00:57:46 +0000 (0:00:00.347) 0:00:16.205 ****** 2026-01-01 00:59:48.101731 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101737 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.101744 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.101774 | orchestrator | 2026-01-01 00:59:48.101781 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-01 
00:59:48.101787 | orchestrator | Thursday 01 January 2026 00:57:46 +0000 (0:00:00.339) 0:00:16.545 ****** 2026-01-01 00:59:48.101793 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.101805 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.101812 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.101817 | orchestrator | 2026-01-01 00:59:48.101823 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-01 00:59:48.101829 | orchestrator | Thursday 01 January 2026 00:57:47 +0000 (0:00:00.544) 0:00:17.090 ****** 2026-01-01 00:59:48.101838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--906f607d--f8ab--576d--9485--c345cfde3c80-osd--block--906f607d--f8ab--576d--9485--c345cfde3c80', 'dm-uuid-LVM-SONYjeZN9GWLGHGRSqE9gmyFPBq2i8yFgW3LbyAuZAjtuiO9nidwiM15Zz4fgBgm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.101846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27db58f4--0fe4--54a7--94bd--e6fe47c26f99-osd--block--27db58f4--0fe4--54a7--94bd--e6fe47c26f99', 'dm-uuid-LVM-mNOe8BDEevDiieOx5pbseSYI92ft5O4Cmn3FTAdpuoRwUKtT432N6EvaDSN0TXAk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.101853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.101868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.101874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.101880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.101886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.101914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.101926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.101933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.101939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4f4651f5--78d1--505d--b741--249c77d228e7-osd--block--4f4651f5--78d1--505d--b741--249c77d228e7', 'dm-uuid-LVM-fwhI3sFpUzo3WZy0vmQJML1CgRIk8v0dTREW6GKmoiy1t1hrsH0lOfVCAZbUSXx8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.101949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part1', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part14', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part15', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part16', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 
'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.101987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--906f607d--f8ab--576d--9485--c345cfde3c80-osd--block--906f607d--f8ab--576d--9485--c345cfde3c80'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6KwxMN-8ZVb-ghoR-r4nZ-md12-SShe-JU1a8A', 'scsi-0QEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec', 'scsi-SQEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.101996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e5dc050d--fe50--5167--b35b--32fd51d3d555-osd--block--e5dc050d--fe50--5167--b35b--32fd51d3d555', 'dm-uuid-LVM-ZxPtyy9M4L3rQExOpuVfQhUHxAUvkDO1GOfgxNBNkDww4BJylWY5eDdcKW6jqPiL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': 
['ceph--27db58f4--0fe4--54a7--94bd--e6fe47c26f99-osd--block--27db58f4--0fe4--54a7--94bd--e6fe47c26f99'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-buewbV-07X8-TKO4-J2HA-HHu5-FICq-7n30Rf', 'scsi-0QEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486', 'scsi-SQEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1', 'scsi-SQEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-01 00:59:48.102121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4f4651f5--78d1--505d--b741--249c77d228e7-osd--block--4f4651f5--78d1--505d--b741--249c77d228e7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cpSwcY-Tv7Z-ZbMx-2Azw-jH5E-jiVi-dVT9ng', 'scsi-0QEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0', 'scsi-SQEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e5dc050d--fe50--5167--b35b--32fd51d3d555-osd--block--e5dc050d--fe50--5167--b35b--32fd51d3d555'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K5uM70-yVwd-0zbA-82AT-IsIp-dFnv-jC7627', 'scsi-0QEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1', 'scsi-SQEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e', 'scsi-SQEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102212 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.102219 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.102226 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21a5f53a--dc04--53e0--afe9--de267ba79db4-osd--block--21a5f53a--dc04--53e0--afe9--de267ba79db4', 'dm-uuid-LVM-jcGAUv83n2cFw4SkjqywuBaM26nHu2nzrBARK8Q6NIOTfqlkkSnEZoKKYb5yRhJ3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b87804f1--5161--5843--851c--861f025ab6ce-osd--block--b87804f1--5161--5843--851c--861f025ab6ce', 'dm-uuid-LVM-SdgupNqEp01AdaxqWCIDJUYHuls443yNnIKXlX0XsZXcY7Vqe0rjrVmyW8IbMBDs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-01-01 00:59:48.102297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-01 00:59:48.102318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102332 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--21a5f53a--dc04--53e0--afe9--de267ba79db4-osd--block--21a5f53a--dc04--53e0--afe9--de267ba79db4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CGBgaN-Wq2R-0G1i-7R7M-nuLJ-oM7J-JKWrs0', 'scsi-0QEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d', 'scsi-SQEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b87804f1--5161--5843--851c--861f025ab6ce-osd--block--b87804f1--5161--5843--851c--861f025ab6ce'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V8spcX-bDDY-3Im3-h7v3-31EX-z9EY-oSFxce', 'scsi-0QEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0', 'scsi-SQEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb', 'scsi-SQEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-01 00:59:48.102367 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.102374 | orchestrator | 2026-01-01 00:59:48.102380 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-01 00:59:48.102387 | orchestrator | Thursday 01 January 2026 00:57:47 +0000 (0:00:00.640) 0:00:17.730 ****** 2026-01-01 00:59:48.102395 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--906f607d--f8ab--576d--9485--c345cfde3c80-osd--block--906f607d--f8ab--576d--9485--c345cfde3c80', 'dm-uuid-LVM-SONYjeZN9GWLGHGRSqE9gmyFPBq2i8yFgW3LbyAuZAjtuiO9nidwiM15Zz4fgBgm'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102408 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--27db58f4--0fe4--54a7--94bd--e6fe47c26f99-osd--block--27db58f4--0fe4--54a7--94bd--e6fe47c26f99', 'dm-uuid-LVM-mNOe8BDEevDiieOx5pbseSYI92ft5O4Cmn3FTAdpuoRwUKtT432N6EvaDSN0TXAk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102416 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102423 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102430 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102446 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102461 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102472 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4f4651f5--78d1--505d--b741--249c77d228e7-osd--block--4f4651f5--78d1--505d--b741--249c77d228e7', 'dm-uuid-LVM-fwhI3sFpUzo3WZy0vmQJML1CgRIk8v0dTREW6GKmoiy1t1hrsH0lOfVCAZbUSXx8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102479 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102485 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e5dc050d--fe50--5167--b35b--32fd51d3d555-osd--block--e5dc050d--fe50--5167--b35b--32fd51d3d555', 'dm-uuid-LVM-ZxPtyy9M4L3rQExOpuVfQhUHxAUvkDO1GOfgxNBNkDww4BJylWY5eDdcKW6jqPiL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102503 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102516 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102541 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102548 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part1', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part14', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part15', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part16', 'scsi-SQEMU_QEMU_HARDDISK_584cdc4f-ae6b-43db-b50c-3ccdc4dc2b91-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102656 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102663 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--906f607d--f8ab--576d--9485--c345cfde3c80-osd--block--906f607d--f8ab--576d--9485--c345cfde3c80'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6KwxMN-8ZVb-ghoR-r4nZ-md12-SShe-JU1a8A', 'scsi-0QEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec', 'scsi-SQEMU_QEMU_HARDDISK_144c3736-9bf7-4bb9-8a0f-53e5ef7f69ec'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102670 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102677 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--27db58f4--0fe4--54a7--94bd--e6fe47c26f99-osd--block--27db58f4--0fe4--54a7--94bd--e6fe47c26f99'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-buewbV-07X8-TKO4-J2HA-HHu5-FICq-7n30Rf', 'scsi-0QEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486', 'scsi-SQEMU_QEMU_HARDDISK_83035846-5651-49b4-8fb4-445ab40cb486'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102703 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102710 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1', 'scsi-SQEMU_QEMU_HARDDISK_37c29c30-7f08-4e38-a8a3-d8f285ca48d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102716 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102727 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e8d0b1d-23e0-4bff-ab16-8f19bc3575ea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-01 00:59:48.102741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102748 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4f4651f5--78d1--505d--b741--249c77d228e7-osd--block--4f4651f5--78d1--505d--b741--249c77d228e7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cpSwcY-Tv7Z-ZbMx-2Azw-jH5E-jiVi-dVT9ng', 'scsi-0QEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0', 'scsi-SQEMU_QEMU_HARDDISK_9c7219fd-4a7f-4761-a2e7-de7bb29f84f0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102755 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e5dc050d--fe50--5167--b35b--32fd51d3d555-osd--block--e5dc050d--fe50--5167--b35b--32fd51d3d555'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-K5uM70-yVwd-0zbA-82AT-IsIp-dFnv-jC7627', 'scsi-0QEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1', 'scsi-SQEMU_QEMU_HARDDISK_586b5bdd-05f0-424a-894b-f7859a2e54f1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102761 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.102767 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e', 'scsi-SQEMU_QEMU_HARDDISK_24720f9e-f089-4ccc-8129-9c8809670a8e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102785 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102791 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.102798 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21a5f53a--dc04--53e0--afe9--de267ba79db4-osd--block--21a5f53a--dc04--53e0--afe9--de267ba79db4', 'dm-uuid-LVM-jcGAUv83n2cFw4SkjqywuBaM26nHu2nzrBARK8Q6NIOTfqlkkSnEZoKKYb5yRhJ3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102805 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b87804f1--5161--5843--851c--861f025ab6ce-osd--block--b87804f1--5161--5843--851c--861f025ab6ce', 'dm-uuid-LVM-SdgupNqEp01AdaxqWCIDJUYHuls443yNnIKXlX0XsZXcY7Vqe0rjrVmyW8IbMBDs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102811 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102818 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102824 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102843 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102850 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102857 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102864 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102874 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102888 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part16', 
'scsi-SQEMU_QEMU_HARDDISK_6e715ee9-bafc-489c-bf52-84e91a8fed44-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--21a5f53a--dc04--53e0--afe9--de267ba79db4-osd--block--21a5f53a--dc04--53e0--afe9--de267ba79db4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CGBgaN-Wq2R-0G1i-7R7M-nuLJ-oM7J-JKWrs0', 'scsi-0QEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d', 'scsi-SQEMU_QEMU_HARDDISK_b8d8b323-8d42-4427-9d99-f11bd160735d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102907 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b87804f1--5161--5843--851c--861f025ab6ce-osd--block--b87804f1--5161--5843--851c--861f025ab6ce'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V8spcX-bDDY-3Im3-h7v3-31EX-z9EY-oSFxce', 'scsi-0QEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0', 'scsi-SQEMU_QEMU_HARDDISK_831e5d56-835d-4e89-9dc9-0085220c39c0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb', 'scsi-SQEMU_QEMU_HARDDISK_a7505c52-a0e0-4d49-8d34-7b67910eacfb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102933 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-01-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-01 00:59:48.102940 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.102946 | orchestrator | 2026-01-01 00:59:48.102952 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-01 00:59:48.102959 | orchestrator | Thursday 01 January 2026 00:57:48 +0000 (0:00:00.684) 0:00:18.415 ****** 2026-01-01 00:59:48.102965 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.102971 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.102977 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.102984 | orchestrator | 2026-01-01 00:59:48.102990 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-01 00:59:48.102996 | orchestrator | Thursday 01 January 2026 00:57:49 +0000 (0:00:00.681) 0:00:19.097 ****** 2026-01-01 00:59:48.103002 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.103009 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.103016 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.103022 | orchestrator | 2026-01-01 00:59:48.103027 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-01 00:59:48.103033 | orchestrator | Thursday 01 January 2026 00:57:49 +0000 (0:00:00.528) 0:00:19.625 ****** 2026-01-01 00:59:48.103039 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.103046 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.103052 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.103058 | orchestrator | 2026-01-01 00:59:48.103064 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-01 00:59:48.103070 | orchestrator | Thursday 01 January 2026 00:57:50 +0000 (0:00:00.750) 0:00:20.376 
****** 2026-01-01 00:59:48.103077 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.103083 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.103088 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.103094 | orchestrator | 2026-01-01 00:59:48.103099 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-01 00:59:48.103106 | orchestrator | Thursday 01 January 2026 00:57:50 +0000 (0:00:00.345) 0:00:20.721 ****** 2026-01-01 00:59:48.103112 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.103118 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.103125 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.103131 | orchestrator | 2026-01-01 00:59:48.103137 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-01 00:59:48.103143 | orchestrator | Thursday 01 January 2026 00:57:51 +0000 (0:00:00.454) 0:00:21.176 ****** 2026-01-01 00:59:48.103149 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.103155 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.103160 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.103166 | orchestrator | 2026-01-01 00:59:48.103172 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-01 00:59:48.103183 | orchestrator | Thursday 01 January 2026 00:57:51 +0000 (0:00:00.603) 0:00:21.779 ****** 2026-01-01 00:59:48.103189 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-01 00:59:48.103195 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-01 00:59:48.103201 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-01 00:59:48.103207 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-01 00:59:48.103213 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-01 00:59:48.103220 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-01 00:59:48.103227 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-01 00:59:48.103233 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-01 00:59:48.103239 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-01 00:59:48.103245 | orchestrator | 2026-01-01 00:59:48.103252 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-01 00:59:48.103258 | orchestrator | Thursday 01 January 2026 00:57:52 +0000 (0:00:00.900) 0:00:22.680 ****** 2026-01-01 00:59:48.103264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-01 00:59:48.103273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-01 00:59:48.103279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-01 00:59:48.103285 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.103291 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-01 00:59:48.103297 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-01 00:59:48.103303 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-01 00:59:48.103310 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.103316 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-01 00:59:48.103322 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-01 00:59:48.103328 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-01 00:59:48.103334 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.103340 | orchestrator | 2026-01-01 00:59:48.103347 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-01 00:59:48.103353 | orchestrator | Thursday 01 January 2026 00:57:53 +0000 (0:00:00.382) 0:00:23.063 ****** 2026-01-01 
00:59:48.103360 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 00:59:48.103366 | orchestrator | 2026-01-01 00:59:48.103372 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-01 00:59:48.103380 | orchestrator | Thursday 01 January 2026 00:57:53 +0000 (0:00:00.744) 0:00:23.807 ****** 2026-01-01 00:59:48.103391 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.103398 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.103405 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.103411 | orchestrator | 2026-01-01 00:59:48.103418 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-01 00:59:48.103429 | orchestrator | Thursday 01 January 2026 00:57:54 +0000 (0:00:00.325) 0:00:24.133 ****** 2026-01-01 00:59:48.103436 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.103442 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.103449 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.103455 | orchestrator | 2026-01-01 00:59:48.103461 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-01 00:59:48.103467 | orchestrator | Thursday 01 January 2026 00:57:54 +0000 (0:00:00.335) 0:00:24.468 ****** 2026-01-01 00:59:48.103474 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.103480 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.103486 | orchestrator | skipping: [testbed-node-5] 2026-01-01 00:59:48.103492 | orchestrator | 2026-01-01 00:59:48.103498 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-01 00:59:48.103510 | orchestrator | Thursday 01 January 2026 00:57:54 +0000 (0:00:00.354) 0:00:24.822 ****** 2026-01-01 
00:59:48.103516 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.103630 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.103635 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.103638 | orchestrator | 2026-01-01 00:59:48.103642 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-01 00:59:48.103646 | orchestrator | Thursday 01 January 2026 00:57:55 +0000 (0:00:00.967) 0:00:25.789 ****** 2026-01-01 00:59:48.103650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 00:59:48.103654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 00:59:48.103657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 00:59:48.103661 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.103665 | orchestrator | 2026-01-01 00:59:48.103669 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-01 00:59:48.103673 | orchestrator | Thursday 01 January 2026 00:57:56 +0000 (0:00:00.428) 0:00:26.218 ****** 2026-01-01 00:59:48.103676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 00:59:48.103680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 00:59:48.103684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 00:59:48.103687 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.103691 | orchestrator | 2026-01-01 00:59:48.103695 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-01 00:59:48.103699 | orchestrator | Thursday 01 January 2026 00:57:56 +0000 (0:00:00.385) 0:00:26.603 ****** 2026-01-01 00:59:48.103702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-01 00:59:48.103706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-01 00:59:48.103710 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-01 00:59:48.103713 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.103717 | orchestrator | 2026-01-01 00:59:48.103721 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-01 00:59:48.103725 | orchestrator | Thursday 01 January 2026 00:57:57 +0000 (0:00:00.390) 0:00:26.994 ****** 2026-01-01 00:59:48.103728 | orchestrator | ok: [testbed-node-3] 2026-01-01 00:59:48.103732 | orchestrator | ok: [testbed-node-4] 2026-01-01 00:59:48.103736 | orchestrator | ok: [testbed-node-5] 2026-01-01 00:59:48.103739 | orchestrator | 2026-01-01 00:59:48.103743 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-01 00:59:48.103747 | orchestrator | Thursday 01 January 2026 00:57:57 +0000 (0:00:00.338) 0:00:27.333 ****** 2026-01-01 00:59:48.103751 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-01 00:59:48.103754 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-01 00:59:48.103758 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-01 00:59:48.103762 | orchestrator | 2026-01-01 00:59:48.103765 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-01 00:59:48.103769 | orchestrator | Thursday 01 January 2026 00:57:57 +0000 (0:00:00.524) 0:00:27.857 ****** 2026-01-01 00:59:48.103773 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 00:59:48.103777 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 00:59:48.103780 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 00:59:48.103784 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-01 00:59:48.103788 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-01 00:59:48.103792 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-01 00:59:48.103795 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-01 00:59:48.103799 | orchestrator | 2026-01-01 00:59:48.103808 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-01 00:59:48.103812 | orchestrator | Thursday 01 January 2026 00:57:59 +0000 (0:00:01.114) 0:00:28.972 ****** 2026-01-01 00:59:48.103815 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-01 00:59:48.103819 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-01 00:59:48.103823 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-01 00:59:48.103826 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-01 00:59:48.103830 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-01 00:59:48.103834 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-01 00:59:48.103844 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-01 00:59:48.103848 | orchestrator | 2026-01-01 00:59:48.103851 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-01 00:59:48.103859 | orchestrator | Thursday 01 January 2026 00:58:01 +0000 (0:00:02.112) 0:00:31.085 ****** 2026-01-01 00:59:48.103862 | orchestrator | skipping: [testbed-node-3] 2026-01-01 00:59:48.103866 | orchestrator | skipping: [testbed-node-4] 2026-01-01 00:59:48.103870 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-01 00:59:48.103874 | orchestrator | 2026-01-01 00:59:48.103877 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-01 00:59:48.103881 | orchestrator | Thursday 01 January 2026 00:58:01 +0000 (0:00:00.395) 0:00:31.481 ****** 2026-01-01 00:59:48.103886 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-01 00:59:48.103891 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-01 00:59:48.103895 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-01 00:59:48.103899 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-01 00:59:48.103903 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-01 00:59:48.103907 | orchestrator | 2026-01-01 00:59:48.103910 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-01 00:59:48.103914 | orchestrator | Thursday 01 January 2026 00:58:50 +0000 (0:00:48.785) 0:01:20.266 ****** 2026-01-01 00:59:48.103918 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103921 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103925 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103929 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103935 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103939 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103943 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-01 00:59:48.103947 | orchestrator | 2026-01-01 00:59:48.103950 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-01 00:59:48.103954 | orchestrator | Thursday 01 January 2026 00:59:14 +0000 (0:00:24.409) 0:01:44.676 ****** 2026-01-01 00:59:48.103958 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103961 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103965 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103969 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103972 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103976 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103980 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-01 00:59:48.103984 | orchestrator | 2026-01-01 00:59:48.103987 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-01 00:59:48.103991 | orchestrator | Thursday 01 January 2026 00:59:27 +0000 (0:00:12.382) 0:01:57.058 ****** 2026-01-01 00:59:48.103995 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.103998 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-01 00:59:48.104002 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 00:59:48.104006 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.104018 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-01 00:59:48.104025 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 00:59:48.104029 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.104035 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-01 00:59:48.104039 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 00:59:48.104043 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.104047 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-01 00:59:48.104050 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 00:59:48.104054 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.104058 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-01 00:59:48.104062 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 00:59:48.104065 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-01 00:59:48.104069 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-01 00:59:48.104073 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-01 00:59:48.104077 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-01 00:59:48.104080 | orchestrator | 2026-01-01 00:59:48.104084 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 00:59:48.104088 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-01 00:59:48.104093 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-01 00:59:48.104100 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-01 00:59:48.104104 | orchestrator | 2026-01-01 00:59:48.104108 | orchestrator | 2026-01-01 00:59:48.104112 | orchestrator | 2026-01-01 00:59:48.104116 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 00:59:48.104120 | orchestrator | Thursday 01 January 2026 00:59:46 +0000 (0:00:18.839) 0:02:15.898 ****** 2026-01-01 00:59:48.104123 | orchestrator | =============================================================================== 2026-01-01 00:59:48.104127 | orchestrator | create openstack pool(s) ----------------------------------------------- 48.79s 2026-01-01 00:59:48.104131 | orchestrator | generate keys ---------------------------------------------------------- 24.41s 2026-01-01 00:59:48.104134 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.84s 
2026-01-01 00:59:48.104138 | orchestrator | get keys from monitors ------------------------------------------------- 12.38s 2026-01-01 00:59:48.104142 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.17s 2026-01-01 00:59:48.104146 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.11s 2026-01-01 00:59:48.104149 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.08s 2026-01-01 00:59:48.104153 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.11s 2026-01-01 00:59:48.104157 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.97s 2026-01-01 00:59:48.104161 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.90s 2026-01-01 00:59:48.104165 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s 2026-01-01 00:59:48.104168 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.75s 2026-01-01 00:59:48.104172 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.74s 2026-01-01 00:59:48.104176 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.74s 2026-01-01 00:59:48.104180 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.71s 2026-01-01 00:59:48.104183 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.70s 2026-01-01 00:59:48.104187 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.68s 2026-01-01 00:59:48.104191 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.68s 2026-01-01 00:59:48.104195 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.64s 2026-01-01 
00:59:48.104198 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s
2026-01-01 00:59:48.104202 | orchestrator | 2026-01-01 00:59:48 | INFO  | Task 6c00045f-2c0a-46fd-8c52-27840abff7a3 is in state STARTED
2026-01-01 00:59:48.104206 | orchestrator | 2026-01-01 00:59:48 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:48.104210 | orchestrator | 2026-01-01 00:59:48 | INFO  | Wait 1 second(s) until the next check
2026-01-01 00:59:51.143211 | orchestrator | 2026-01-01 00:59:51 | INFO  | Task f08008c8-a53c-46c8-8bf0-d1e2d29e7e9a is in state STARTED
2026-01-01 00:59:51.147156 | orchestrator | 2026-01-01 00:59:51 | INFO  | Task f05f1b27-c9d2-4fe7-bfba-248a0c5339df is in state STARTED
2026-01-01 00:59:51.150224 | orchestrator | 2026-01-01 00:59:51 | INFO  | Task e96b7077-4bd4-4976-914b-d3b887009974 is in state STARTED
2026-01-01 00:59:51.154130 | orchestrator | 2026-01-01 00:59:51 | INFO  | Task 6c00045f-2c0a-46fd-8c52-27840abff7a3 is in state STARTED
2026-01-01 00:59:51.156755 | orchestrator | 2026-01-01 00:59:51 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 00:59:51.157310 | orchestrator | 2026-01-01 00:59:51 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:00:24.823607 | orchestrator | 2026-01-01 01:00:24 | INFO  | Task f08008c8-a53c-46c8-8bf0-d1e2d29e7e9a is in state STARTED
2026-01-01 01:00:24.825219 | orchestrator | 2026-01-01 01:00:24 | INFO  | Task f05f1b27-c9d2-4fe7-bfba-248a0c5339df is in state STARTED
2026-01-01 01:00:24.827206 | orchestrator | 2026-01-01 01:00:24 | INFO  | Task e96b7077-4bd4-4976-914b-d3b887009974 is in state STARTED
2026-01-01 01:00:24.829955 | orchestrator | 2026-01-01 01:00:24 | INFO  | Task 6c00045f-2c0a-46fd-8c52-27840abff7a3 is in state STARTED
2026-01-01 01:00:24.832559 | orchestrator | 2026-01-01 01:00:24 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state STARTED
2026-01-01 01:00:24.832620 | orchestrator | 2026-01-01 01:00:24 | INFO  | Wait 1
second(s) until the next check
2026-01-01 01:00:27.876607 | orchestrator | 2026-01-01 01:00:27 | INFO  | Task f08008c8-a53c-46c8-8bf0-d1e2d29e7e9a is in state STARTED
2026-01-01 01:00:27.880622 | orchestrator | 2026-01-01 01:00:27 | INFO  | Task f05f1b27-c9d2-4fe7-bfba-248a0c5339df is in state STARTED
2026-01-01 01:00:27.884665 | orchestrator | 2026-01-01 01:00:27 | INFO  | Task e96b7077-4bd4-4976-914b-d3b887009974 is in state STARTED
2026-01-01 01:00:27.887163 | orchestrator | 2026-01-01 01:00:27 | INFO  | Task 6c00045f-2c0a-46fd-8c52-27840abff7a3 is in state SUCCESS
2026-01-01 01:00:27.890605 | orchestrator | 2026-01-01 01:00:27 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED
2026-01-01 01:00:27.895182 | orchestrator | 2026-01-01 01:00:27 | INFO  | Task 0789b926-23a5-4394-bec1-27ba3841eaf3 is in state SUCCESS
2026-01-01 01:00:27.900186 | orchestrator |
2026-01-01 01:00:27.900236 | orchestrator |
2026-01-01 01:00:27.900249 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-01-01 01:00:27.900262 | orchestrator |
2026-01-01 01:00:27.900273 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-01-01 01:00:27.900285 | orchestrator | Thursday 01 January 2026 00:59:51 +0000 (0:00:00.183) 0:00:00.183 ******
2026-01-01 01:00:27.900297 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-01 01:00:27.900309 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.900320 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.900332 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-01 01:00:27.900343 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.900354 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-01 01:00:27.900365 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-01 01:00:27.900375 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-01 01:00:27.900386 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-01 01:00:27.900397 | orchestrator |
2026-01-01 01:00:27.900408 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-01-01 01:00:27.900444 | orchestrator | Thursday 01 January 2026 00:59:56 +0000 (0:00:04.877) 0:00:05.061 ******
2026-01-01 01:00:27.900456 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-01-01 01:00:27.900467 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.900478 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.900488 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-01-01 01:00:27.900546 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.900557 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-01-01 01:00:27.900568 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-01-01 01:00:27.900579 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-01-01 01:00:27.900590 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-01-01 01:00:27.900601 | orchestrator |
2026-01-01 01:00:27.900612 | orchestrator | TASK [Create share directory] **************************************************
2026-01-01 01:00:27.900623 | orchestrator | Thursday 01 January 2026 01:00:00 +0000 (0:00:04.497) 0:00:09.558 ******
2026-01-01 01:00:27.900635 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-01 01:00:27.900646 | orchestrator |
2026-01-01 01:00:27.900657 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-01-01 01:00:27.900668 | orchestrator | Thursday 01 January 2026 01:00:01 +0000 (0:00:01.080) 0:00:10.639 ******
2026-01-01 01:00:27.900679 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-01-01 01:00:27.900689 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.900700 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.900711 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-01-01 01:00:27.900722 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.900735 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-01-01 01:00:27.900749 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-01-01 01:00:27.900762 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-01-01 01:00:27.900775 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-01-01 01:00:27.900787 | orchestrator |
2026-01-01 01:00:27.900800 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-01-01 01:00:27.900812 | orchestrator | Thursday 01 January 2026 01:00:16 +0000 (0:00:14.478) 0:00:25.117 ******
2026-01-01 01:00:27.900841 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-01-01 01:00:27.900855 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-01-01 01:00:27.900869 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-01 01:00:27.900883 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-01-01 01:00:27.900912 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-01 01:00:27.900926 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-01-01 01:00:27.900939 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-01-01 01:00:27.900960 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-01-01 01:00:27.900973 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-01-01 01:00:27.900986 | orchestrator |
2026-01-01 01:00:27.901000 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-01-01 01:00:27.901014 | orchestrator | Thursday 01 January 2026 01:00:19 +0000 (0:00:03.444) 0:00:28.561 ******
2026-01-01 01:00:27.901027 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-01-01 01:00:27.901040 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.901054 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.901067 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-01-01 01:00:27.901080 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-01-01 01:00:27.901093 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-01-01 01:00:27.901107 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-01-01 01:00:27.901118 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-01-01 01:00:27.901129 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-01-01 01:00:27.901140 | orchestrator |
2026-01-01 01:00:27.901151 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:00:27.901162 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:00:27.901174 | orchestrator |
2026-01-01 01:00:27.901185 | orchestrator |
2026-01-01 01:00:27.901196 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 01:00:27.901207 | orchestrator | Thursday 01 January 2026 01:00:27 +0000 (0:00:07.325) 0:00:35.887 ******
2026-01-01 01:00:27.901217 | orchestrator | ===============================================================================
2026-01-01 01:00:27.901228 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.48s
2026-01-01 01:00:27.901239 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.33s
2026-01-01 01:00:27.901250 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.88s
2026-01-01 01:00:27.901261 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.50s
2026-01-01 01:00:27.901271 | orchestrator | Check if target
directories exist --------------------------------------- 3.44s
2026-01-01 01:00:27.901282 | orchestrator | Create share directory -------------------------------------------------- 1.08s
2026-01-01 01:00:27.901293 | orchestrator |
2026-01-01 01:00:27.901303 | orchestrator |
2026-01-01 01:00:27.901314 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 01:00:27.901325 | orchestrator |
2026-01-01 01:00:27.901336 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 01:00:27.901346 | orchestrator | Thursday 01 January 2026 00:58:39 +0000 (0:00:00.270) 0:00:00.270 ******
2026-01-01 01:00:27.901357 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:00:27.901368 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:00:27.901379 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:00:27.901390 | orchestrator |
2026-01-01 01:00:27.901401 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 01:00:27.901412 | orchestrator | Thursday 01 January 2026 00:58:39 +0000 (0:00:00.367) 0:00:00.637 ******
2026-01-01 01:00:27.901422 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-01-01 01:00:27.901434 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-01-01 01:00:27.901445 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-01-01 01:00:27.901455 | orchestrator |
2026-01-01 01:00:27.901466 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-01-01 01:00:27.901484 | orchestrator |
2026-01-01 01:00:27.901556 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-01 01:00:27.901569 | orchestrator | Thursday 01 January 2026 00:58:40 +0000 (0:00:00.509) 0:00:01.147 ******
2026-01-01 01:00:27.901579 | orchestrator | included:
/ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:00:27.901590 | orchestrator | 2026-01-01 01:00:27.901601 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-01 01:00:27.901612 | orchestrator | Thursday 01 January 2026 00:58:40 +0000 (0:00:00.706) 0:00:01.854 ****** 2026-01-01 01:00:27.901649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:00:27.901673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:00:27.901705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:00:27.901718 | orchestrator | 2026-01-01 01:00:27.901729 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-01 01:00:27.901741 | orchestrator | Thursday 01 January 2026 00:58:42 +0000 (0:00:01.214) 0:00:03.068 ****** 2026-01-01 01:00:27.901752 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:27.901763 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:27.901773 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:27.901784 | orchestrator | 2026-01-01 01:00:27.901795 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-01 01:00:27.901806 | orchestrator | Thursday 01 January 2026 00:58:42 +0000 (0:00:00.485) 0:00:03.553 ****** 2026-01-01 01:00:27.901817 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-01 01:00:27.901827 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-01 
01:00:27.901846 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-01 01:00:27.901856 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-01 01:00:27.901867 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-01 01:00:27.901878 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-01 01:00:27.901889 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-01 01:00:27.901900 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-01 01:00:27.901910 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-01 01:00:27.901921 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-01 01:00:27.901932 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-01 01:00:27.901943 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-01 01:00:27.901954 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-01 01:00:27.901965 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-01 01:00:27.901975 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-01 01:00:27.901986 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-01 01:00:27.901997 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-01 01:00:27.902013 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-01 01:00:27.902071 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 
'enabled': False})  2026-01-01 01:00:27.902081 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-01 01:00:27.902091 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-01 01:00:27.902100 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-01 01:00:27.902116 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-01 01:00:27.902126 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-01 01:00:27.902138 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-01 01:00:27.902150 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-01 01:00:27.902159 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-01 01:00:27.902169 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-01 01:00:27.902179 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-01 01:00:27.902189 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-01 01:00:27.902198 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': 
True}) 2026-01-01 01:00:27.902208 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-01 01:00:27.902217 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-01 01:00:27.902234 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-01 01:00:27.902244 | orchestrator | 2026-01-01 01:00:27.902254 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:00:27.902263 | orchestrator | Thursday 01 January 2026 00:58:43 +0000 (0:00:00.796) 0:00:04.350 ****** 2026-01-01 01:00:27.902273 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:27.902283 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:27.902292 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:27.902302 | orchestrator | 2026-01-01 01:00:27.902311 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:00:27.902321 | orchestrator | Thursday 01 January 2026 00:58:43 +0000 (0:00:00.287) 0:00:04.638 ****** 2026-01-01 01:00:27.902330 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.902340 | orchestrator | 2026-01-01 01:00:27.902350 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:00:27.902359 | orchestrator | Thursday 01 January 2026 00:58:43 +0000 (0:00:00.174) 0:00:04.812 ****** 2026-01-01 01:00:27.902369 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.902379 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.902388 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.902398 | orchestrator | 2026-01-01 
01:00:27.902407 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:00:27.902417 | orchestrator | Thursday 01 January 2026 00:58:44 +0000 (0:00:00.551) 0:00:05.363 ****** 2026-01-01 01:00:27.902426 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:27.902436 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:27.902446 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:27.902455 | orchestrator | 2026-01-01 01:00:27.902464 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:00:27.902474 | orchestrator | Thursday 01 January 2026 00:58:44 +0000 (0:00:00.328) 0:00:05.692 ****** 2026-01-01 01:00:27.902484 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.902514 | orchestrator | 2026-01-01 01:00:27.902525 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:00:27.902535 | orchestrator | Thursday 01 January 2026 00:58:44 +0000 (0:00:00.128) 0:00:05.820 ****** 2026-01-01 01:00:27.902544 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.902554 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.902563 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.902572 | orchestrator | 2026-01-01 01:00:27.902582 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:00:27.902591 | orchestrator | Thursday 01 January 2026 00:58:45 +0000 (0:00:00.361) 0:00:06.181 ****** 2026-01-01 01:00:27.902601 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:27.902610 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:27.902620 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:27.902629 | orchestrator | 2026-01-01 01:00:27.902639 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:00:27.902653 | orchestrator | Thursday 01 January 
2026 00:58:45 +0000 (0:00:00.354) 0:00:06.535 ****** 2026-01-01 01:00:27.902663 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.902673 | orchestrator | 2026-01-01 01:00:27.902683 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:00:27.902692 | orchestrator | Thursday 01 January 2026 00:58:45 +0000 (0:00:00.160) 0:00:06.696 ****** 2026-01-01 01:00:27.902702 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.902711 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.902721 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.902730 | orchestrator | 2026-01-01 01:00:27.902740 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:00:27.902763 | orchestrator | Thursday 01 January 2026 00:58:46 +0000 (0:00:00.744) 0:00:07.441 ****** 2026-01-01 01:00:27.902773 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:27.902783 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:27.902793 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:27.902802 | orchestrator | 2026-01-01 01:00:27.902812 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:00:27.902821 | orchestrator | Thursday 01 January 2026 00:58:46 +0000 (0:00:00.466) 0:00:07.908 ****** 2026-01-01 01:00:27.902831 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.902840 | orchestrator | 2026-01-01 01:00:27.902850 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:00:27.902860 | orchestrator | Thursday 01 January 2026 00:58:47 +0000 (0:00:00.165) 0:00:08.073 ****** 2026-01-01 01:00:27.902869 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.902879 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.902888 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.902897 | 
orchestrator | 2026-01-01 01:00:27.902907 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:00:27.902916 | orchestrator | Thursday 01 January 2026 00:58:47 +0000 (0:00:00.323) 0:00:08.396 ****** 2026-01-01 01:00:27.902926 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:27.902936 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:27.902945 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:27.902954 | orchestrator | 2026-01-01 01:00:27.902964 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:00:27.902973 | orchestrator | Thursday 01 January 2026 00:58:47 +0000 (0:00:00.542) 0:00:08.939 ****** 2026-01-01 01:00:27.902983 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.902992 | orchestrator | 2026-01-01 01:00:27.903002 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:00:27.903011 | orchestrator | Thursday 01 January 2026 00:58:48 +0000 (0:00:00.156) 0:00:09.095 ****** 2026-01-01 01:00:27.903021 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.903031 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.903040 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.903050 | orchestrator | 2026-01-01 01:00:27.903059 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:00:27.903069 | orchestrator | Thursday 01 January 2026 00:58:48 +0000 (0:00:00.302) 0:00:09.398 ****** 2026-01-01 01:00:27.903078 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:27.903088 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:27.903098 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:27.903107 | orchestrator | 2026-01-01 01:00:27.903116 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:00:27.903126 | orchestrator 
| Thursday 01 January 2026 00:58:48 +0000 (0:00:00.351) 0:00:09.749 ****** 2026-01-01 01:00:27.903135 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.903145 | orchestrator | 2026-01-01 01:00:27.903155 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:00:27.903164 | orchestrator | Thursday 01 January 2026 00:58:48 +0000 (0:00:00.139) 0:00:09.888 ****** 2026-01-01 01:00:27.903174 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.903183 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.903193 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.903202 | orchestrator | 2026-01-01 01:00:27.903211 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:00:27.903221 | orchestrator | Thursday 01 January 2026 00:58:49 +0000 (0:00:00.385) 0:00:10.274 ****** 2026-01-01 01:00:27.903230 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:27.903240 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:27.903249 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:27.903258 | orchestrator | 2026-01-01 01:00:27.903268 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:00:27.903277 | orchestrator | Thursday 01 January 2026 00:58:49 +0000 (0:00:00.651) 0:00:10.926 ****** 2026-01-01 01:00:27.903293 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.903302 | orchestrator | 2026-01-01 01:00:27.903312 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:00:27.903321 | orchestrator | Thursday 01 January 2026 00:58:50 +0000 (0:00:00.170) 0:00:11.096 ****** 2026-01-01 01:00:27.903331 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.903340 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.903350 | orchestrator | skipping: [testbed-node-2] 2026-01-01 
01:00:27.903359 | orchestrator | 2026-01-01 01:00:27.903369 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:00:27.903378 | orchestrator | Thursday 01 January 2026 00:58:50 +0000 (0:00:00.371) 0:00:11.468 ****** 2026-01-01 01:00:27.903388 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:27.903397 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:27.903407 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:27.903416 | orchestrator | 2026-01-01 01:00:27.903425 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:00:27.903435 | orchestrator | Thursday 01 January 2026 00:58:50 +0000 (0:00:00.358) 0:00:11.826 ****** 2026-01-01 01:00:27.903445 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.903454 | orchestrator | 2026-01-01 01:00:27.903463 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:00:27.903473 | orchestrator | Thursday 01 January 2026 00:58:50 +0000 (0:00:00.135) 0:00:11.962 ****** 2026-01-01 01:00:27.903482 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.903512 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.903529 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.903546 | orchestrator | 2026-01-01 01:00:27.903576 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:00:27.903588 | orchestrator | Thursday 01 January 2026 00:58:51 +0000 (0:00:00.306) 0:00:12.269 ****** 2026-01-01 01:00:27.903598 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:27.903607 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:27.903617 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:27.903626 | orchestrator | 2026-01-01 01:00:27.903636 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 
01:00:27.903645 | orchestrator | Thursday 01 January 2026 00:58:51 +0000 (0:00:00.589) 0:00:12.859 ****** 2026-01-01 01:00:27.903655 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.903664 | orchestrator | 2026-01-01 01:00:27.903680 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:00:27.903690 | orchestrator | Thursday 01 January 2026 00:58:52 +0000 (0:00:00.155) 0:00:13.014 ****** 2026-01-01 01:00:27.903700 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.903709 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.903719 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.903729 | orchestrator | 2026-01-01 01:00:27.903738 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-01 01:00:27.903748 | orchestrator | Thursday 01 January 2026 00:58:52 +0000 (0:00:00.344) 0:00:13.359 ****** 2026-01-01 01:00:27.903758 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:27.903767 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:27.903777 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:27.903786 | orchestrator | 2026-01-01 01:00:27.903796 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-01 01:00:27.903806 | orchestrator | Thursday 01 January 2026 00:58:52 +0000 (0:00:00.352) 0:00:13.711 ****** 2026-01-01 01:00:27.903816 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.903854 | orchestrator | 2026-01-01 01:00:27.903865 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-01 01:00:27.903875 | orchestrator | Thursday 01 January 2026 00:58:52 +0000 (0:00:00.144) 0:00:13.856 ****** 2026-01-01 01:00:27.903884 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.903894 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.903911 | orchestrator | skipping: 
[testbed-node-2] 2026-01-01 01:00:27.903921 | orchestrator | 2026-01-01 01:00:27.903931 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-01 01:00:27.903940 | orchestrator | Thursday 01 January 2026 00:58:53 +0000 (0:00:00.541) 0:00:14.397 ****** 2026-01-01 01:00:27.903950 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:00:27.903959 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:00:27.903969 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:00:27.903979 | orchestrator | 2026-01-01 01:00:27.903990 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-01 01:00:27.904006 | orchestrator | Thursday 01 January 2026 00:58:55 +0000 (0:00:01.844) 0:00:16.241 ****** 2026-01-01 01:00:27.904019 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-01 01:00:27.904029 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-01 01:00:27.904039 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-01 01:00:27.904048 | orchestrator | 2026-01-01 01:00:27.904058 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-01 01:00:27.904067 | orchestrator | Thursday 01 January 2026 00:58:57 +0000 (0:00:01.876) 0:00:18.117 ****** 2026-01-01 01:00:27.904077 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-01 01:00:27.904089 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-01 01:00:27.904105 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-01 01:00:27.904116 | orchestrator | 2026-01-01 01:00:27.904125 | orchestrator | TASK [horizon : 
Copying over custom-settings.py] ******************************* 2026-01-01 01:00:27.904135 | orchestrator | Thursday 01 January 2026 00:58:59 +0000 (0:00:02.292) 0:00:20.410 ****** 2026-01-01 01:00:27.904144 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-01 01:00:27.904154 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-01 01:00:27.904163 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-01 01:00:27.904173 | orchestrator | 2026-01-01 01:00:27.904182 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-01-01 01:00:27.904192 | orchestrator | Thursday 01 January 2026 00:59:01 +0000 (0:00:02.286) 0:00:22.696 ****** 2026-01-01 01:00:27.904201 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.904211 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.904220 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.904229 | orchestrator | 2026-01-01 01:00:27.904239 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-01-01 01:00:27.904249 | orchestrator | Thursday 01 January 2026 00:59:02 +0000 (0:00:00.306) 0:00:23.002 ****** 2026-01-01 01:00:27.904259 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.904268 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.904278 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.904287 | orchestrator | 2026-01-01 01:00:27.904297 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-01 01:00:27.904306 | orchestrator | Thursday 01 January 2026 00:59:02 +0000 (0:00:00.300) 0:00:23.303 ****** 2026-01-01 01:00:27.904316 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:00:27.904325 | orchestrator | 2026-01-01 01:00:27.904335 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-01 01:00:27.904349 | orchestrator | Thursday 01 January 2026 00:59:03 +0000 (0:00:00.872) 0:00:24.176 ****** 2026-01-01 01:00:27.904371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:00:27.904395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:00:27.904421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:00:27.904433 | orchestrator | 2026-01-01 01:00:27.904443 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-01 01:00:27.904453 | orchestrator | Thursday 01 January 2026 00:59:04 +0000 (0:00:01.541) 0:00:25.717 ****** 2026-01-01 01:00:27.904475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:00:27.904549 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.904563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:00:27.904574 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.904599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:00:27.904613 | orchestrator | skipping: [testbed-node-2] 
2026-01-01 01:00:27.904622 | orchestrator | 2026-01-01 01:00:27.904630 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-01 01:00:27.904638 | orchestrator | Thursday 01 January 2026 00:59:05 +0000 (0:00:00.667) 0:00:26.385 ****** 2026-01-01 01:00:27.904646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:00:27.904655 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.904675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:00:27.904689 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.904698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-01 01:00:27.904707 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.904720 | orchestrator | 2026-01-01 01:00:27.904728 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-01 01:00:27.904736 | orchestrator | Thursday 01 January 2026 00:59:06 +0000 (0:00:00.849) 0:00:27.234 ****** 2026-01-01 01:00:27.904755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 
'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:00:27.904765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:00:27.904793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-01 01:00:27.904803 | orchestrator | 
2026-01-01 01:00:27.904811 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-01 01:00:27.904819 | orchestrator | Thursday 01 January 2026 00:59:08 +0000 (0:00:01.841) 0:00:29.076 ****** 2026-01-01 01:00:27.904827 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:00:27.904835 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:00:27.904843 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:00:27.904851 | orchestrator | 2026-01-01 01:00:27.904859 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-01 01:00:27.904867 | orchestrator | Thursday 01 January 2026 00:59:08 +0000 (0:00:00.352) 0:00:29.429 ****** 2026-01-01 01:00:27.904874 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:00:27.904883 | orchestrator | 2026-01-01 01:00:27.904891 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-01 01:00:27.904899 | orchestrator | Thursday 01 January 2026 00:59:08 +0000 (0:00:00.520) 0:00:29.949 ****** 2026-01-01 01:00:27.904907 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:00:27.904914 | orchestrator | 2026-01-01 01:00:27.904922 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-01 01:00:27.904930 | orchestrator | Thursday 01 January 2026 00:59:11 +0000 (0:00:02.727) 0:00:32.677 ****** 2026-01-01 01:00:27.904944 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:00:27.904952 | orchestrator | 2026-01-01 01:00:27.904960 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-01 01:00:27.904968 | orchestrator | Thursday 01 January 2026 00:59:14 +0000 (0:00:02.922) 0:00:35.599 ****** 2026-01-01 01:00:27.904976 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:00:27.904984 | orchestrator 
| 2026-01-01 01:00:27.904992 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-01 01:00:27.905000 | orchestrator | Thursday 01 January 2026 00:59:31 +0000 (0:00:16.768) 0:00:52.368 ****** 2026-01-01 01:00:27.905007 | orchestrator | 2026-01-01 01:00:27.905015 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-01 01:00:27.905023 | orchestrator | Thursday 01 January 2026 00:59:31 +0000 (0:00:00.065) 0:00:52.433 ****** 2026-01-01 01:00:27.905031 | orchestrator | 2026-01-01 01:00:27.905039 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-01 01:00:27.905047 | orchestrator | Thursday 01 January 2026 00:59:31 +0000 (0:00:00.065) 0:00:52.499 ****** 2026-01-01 01:00:27.905055 | orchestrator | 2026-01-01 01:00:27.905063 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-01-01 01:00:27.905071 | orchestrator | Thursday 01 January 2026 00:59:31 +0000 (0:00:00.068) 0:00:52.567 ****** 2026-01-01 01:00:27.905079 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:00:27.905087 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:00:27.905095 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:00:27.905102 | orchestrator | 2026-01-01 01:00:27.905110 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:00:27.905123 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-01 01:00:27.905131 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-01 01:00:27.905140 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-01-01 01:00:27.905148 | orchestrator | 2026-01-01 01:00:27.905156 | orchestrator | 2026-01-01 01:00:27.905169 
| orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:00:27.905177 | orchestrator | Thursday 01 January 2026 01:00:24 +0000 (0:00:53.046) 0:01:45.614 ****** 2026-01-01 01:00:27.905185 | orchestrator | =============================================================================== 2026-01-01 01:00:27.905193 | orchestrator | horizon : Restart horizon container ------------------------------------ 53.05s 2026-01-01 01:00:27.905201 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.77s 2026-01-01 01:00:27.905209 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.92s 2026-01-01 01:00:27.905221 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.73s 2026-01-01 01:00:27.905234 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.29s 2026-01-01 01:00:27.905247 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.29s 2026-01-01 01:00:27.905259 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.88s 2026-01-01 01:00:27.905273 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.84s 2026-01-01 01:00:27.905285 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.84s 2026-01-01 01:00:27.905293 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.54s 2026-01-01 01:00:27.905301 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.21s 2026-01-01 01:00:27.905308 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.87s 2026-01-01 01:00:27.905316 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.85s 2026-01-01 01:00:27.905329 | orchestrator | 
horizon : include_tasks ------------------------------------------------- 0.80s 2026-01-01 01:00:27.905337 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.74s 2026-01-01 01:00:27.905345 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2026-01-01 01:00:27.905353 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s 2026-01-01 01:00:27.905360 | orchestrator | horizon : Update policy file name --------------------------------------- 0.65s 2026-01-01 01:00:27.905368 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s 2026-01-01 01:00:27.905376 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s 2026-01-01 01:00:27.905384 | orchestrator | 2026-01-01 01:00:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:00:30.964041 | orchestrator | 2026-01-01 01:00:30 | INFO  | Task f08008c8-a53c-46c8-8bf0-d1e2d29e7e9a is in state STARTED 2026-01-01 01:00:30.965880 | orchestrator | 2026-01-01 01:00:30 | INFO  | Task f05f1b27-c9d2-4fe7-bfba-248a0c5339df is in state STARTED 2026-01-01 01:00:30.968752 | orchestrator | 2026-01-01 01:00:30 | INFO  | Task e96b7077-4bd4-4976-914b-d3b887009974 is in state STARTED 2026-01-01 01:00:30.971116 | orchestrator | 2026-01-01 01:00:30 | INFO  | Task 62150cd9-e8bf-45ce-855c-5099bf88c85f is in state STARTED 2026-01-01 01:00:30.972847 | orchestrator | 2026-01-01 01:00:30 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:00:30.972893 | orchestrator | 2026-01-01 01:00:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:00:34.029148 | orchestrator | 2026-01-01 01:00:34 | INFO  | Task f08008c8-a53c-46c8-8bf0-d1e2d29e7e9a is in state STARTED 2026-01-01 01:00:34.032360 | orchestrator | 2026-01-01 01:00:34 | INFO  | Task f05f1b27-c9d2-4fe7-bfba-248a0c5339df is in state 
STARTED 2026-01-01 01:00:34.034545 | orchestrator | 2026-01-01 01:00:34 | INFO  | Task e96b7077-4bd4-4976-914b-d3b887009974 is in state STARTED 2026-01-01 01:00:34.038798 | orchestrator | 2026-01-01 01:00:34 | INFO  | Task 62150cd9-e8bf-45ce-855c-5099bf88c85f is in state STARTED 2026-01-01 01:00:34.039415 | orchestrator | 2026-01-01 01:00:34 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:00:34.039481 | orchestrator | 2026-01-01 01:00:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:00:55.435276 | orchestrator | 2026-01-01 01:00:55 | INFO  | Task f08008c8-a53c-46c8-8bf0-d1e2d29e7e9a is in state STARTED 2026-01-01 01:00:55.437522 | orchestrator | 2026-01-01 01:00:55 | INFO  | Task f05f1b27-c9d2-4fe7-bfba-248a0c5339df is in state STARTED 2026-01-01 01:00:55.439732 | orchestrator | 2026-01-01 01:00:55 | INFO  | Task e96b7077-4bd4-4976-914b-d3b887009974 is in state STARTED 2026-01-01 01:00:55.442092 | orchestrator | 2026-01-01 01:00:55 | INFO  | Task 62150cd9-e8bf-45ce-855c-5099bf88c85f is in state STARTED 2026-01-01 01:00:55.444004 | orchestrator | 2026-01-01 01:00:55 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:00:55.444077 |
orchestrator | 2026-01-01 01:00:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:00:58.493483 | orchestrator | 2026-01-01 01:00:58.493551 | orchestrator | 2026-01-01 01:00:58 | INFO  | Task f08008c8-a53c-46c8-8bf0-d1e2d29e7e9a is in state SUCCESS 2026-01-01 01:00:58.493934 | orchestrator | 2026-01-01 01:00:58.493943 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 01:00:58.493947 | orchestrator | 2026-01-01 01:00:58.493951 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:00:58.493955 | orchestrator | Thursday 01 January 2026 00:59:48 +0000 (0:00:00.284) 0:00:00.284 ****** 2026-01-01 01:00:58.493959 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:58.493964 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:58.493968 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:58.493972 | orchestrator | 2026-01-01 01:00:58.493976 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:00:58.493980 | orchestrator | Thursday 01 January 2026 00:59:49 +0000 (0:00:00.367) 0:00:00.651 ****** 2026-01-01 01:00:58.493984 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-01 01:00:58.493987 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-01 01:00:58.493991 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-01 01:00:58.493995 | orchestrator | 2026-01-01 01:00:58.493999 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-01 01:00:58.494003 | orchestrator | 2026-01-01 01:00:58.494006 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-01 01:00:58.494010 | orchestrator | Thursday 01 January 2026 00:59:49 +0000 (0:00:00.542) 0:00:01.194 ****** 2026-01-01 01:00:58.494039 | orchestrator | included: 
/ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:00:58.494045 | orchestrator | 2026-01-01 01:00:58.494049 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-01-01 01:00:58.494053 | orchestrator | Thursday 01 January 2026 00:59:50 +0000 (0:00:00.610) 0:00:01.804 ****** 2026-01-01 01:00:58.494057 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (5 retries left). 2026-01-01 01:00:58.494060 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (4 retries left). 2026-01-01 01:00:58.494064 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (3 retries left). 2026-01-01 01:00:58.494068 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (2 retries left). 2026-01-01 01:00:58.494072 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (1 retries left). 2026-01-01 01:00:58.494106 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767229254.7584932-3261-219415356080836/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767229254.7584932-3261-219415356080836/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767229254.7584932-3261-219415356080836/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_2ah8bdpn/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_2ah8bdpn/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_2ah8bdpn/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_2ah8bdpn/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_2ah8bdpn/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-01 01:00:58.494129 | orchestrator | 2026-01-01 01:00:58.494134 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:00:58.494140 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-01 01:00:58.494146 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:00:58.494151 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:00:58.494155 | orchestrator | 2026-01-01 01:00:58.494159 | orchestrator | 2026-01-01 01:00:58.494163 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:00:58.494167 | orchestrator | Thursday 01 January 2026 01:00:56 +0000 (0:01:05.743) 0:01:07.548 ****** 2026-01-01 01:00:58.494170 | orchestrator | =============================================================================== 2026-01-01 01:00:58.494175 | orchestrator | service-ks-register : barbican | Creating services --------------------- 65.74s 2026-01-01 01:00:58.494179 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.61s 2026-01-01 01:00:58.494182 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2026-01-01 01:00:58.494186 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2026-01-01 01:00:58.495828 | orchestrator | 2026-01-01 01:00:58 | INFO  | Task f05f1b27-c9d2-4fe7-bfba-248a0c5339df is in state SUCCESS 2026-01-01 01:00:58.496683 | orchestrator | 2026-01-01 01:00:58.496691 | orchestrator | 2026-01-01 01:00:58.496696 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 
01:00:58.496700 | orchestrator | 2026-01-01 01:00:58.496705 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:00:58.496710 | orchestrator | Thursday 01 January 2026 00:59:48 +0000 (0:00:00.289) 0:00:00.289 ****** 2026-01-01 01:00:58.496715 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:00:58.496720 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:00:58.496725 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:00:58.496729 | orchestrator | 2026-01-01 01:00:58.496734 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:00:58.496738 | orchestrator | Thursday 01 January 2026 00:59:49 +0000 (0:00:00.366) 0:00:00.656 ****** 2026-01-01 01:00:58.496743 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-01 01:00:58.496754 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-01 01:00:58.496759 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-01 01:00:58.496763 | orchestrator | 2026-01-01 01:00:58.496768 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-01 01:00:58.496773 | orchestrator | 2026-01-01 01:00:58.496777 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-01 01:00:58.496782 | orchestrator | Thursday 01 January 2026 00:59:49 +0000 (0:00:00.547) 0:00:01.204 ****** 2026-01-01 01:00:58.496787 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:00:58.496792 | orchestrator | 2026-01-01 01:00:58.496796 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-01-01 01:00:58.496801 | orchestrator | Thursday 01 January 2026 00:59:50 +0000 (0:00:00.689) 0:00:01.893 ****** 2026-01-01 01:00:58.496805 | orchestrator | FAILED - RETRYING: 
[testbed-node-0]: designate | Creating services (5 retries left). 2026-01-01 01:00:58.496810 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (4 retries left). 2026-01-01 01:00:58.496815 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (3 retries left). 2026-01-01 01:00:58.496819 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (2 retries left). 2026-01-01 01:00:58.496824 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (1 retries left). 2026-01-01 01:00:58.496842 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767229254.9682174-3277-77309521325744/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767229254.9682174-3277-77309521325744/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767229254.9682174-3277-77309521325744/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_7brnzqoa/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_7brnzqoa/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_7brnzqoa/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_7brnzqoa/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_7brnzqoa/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-01 01:00:58.496850 | orchestrator | 2026-01-01 01:00:58.496854 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:00:58.496859 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-01 01:00:58.496867 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:00:58.496871 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:00:58.496875 | orchestrator | 2026-01-01 01:00:58.496879 | orchestrator | 2026-01-01 01:00:58.496883 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:00:58.496886 | orchestrator | Thursday 01 January 2026 01:00:56 +0000 (0:01:05.759) 0:01:07.653 ****** 2026-01-01 01:00:58.496890 | orchestrator | =============================================================================== 2026-01-01 01:00:58.496894 | orchestrator | service-ks-register : designate | Creating services -------------------- 65.76s 2026-01-01 01:00:58.496898 | orchestrator | designate : include_tasks ----------------------------------------------- 0.69s 2026-01-01 01:00:58.496902 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2026-01-01 01:00:58.496905 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2026-01-01 01:00:58.497906 | orchestrator | 2026-01-01 01:00:58 | INFO  | Task e96b7077-4bd4-4976-914b-d3b887009974 is in state STARTED 2026-01-01 01:00:58.501115 | orchestrator | 2026-01-01 01:00:58 | INFO  | Task 7ce3d765-99b2-41e3-a3e4-679592c677f7 is in state STARTED 2026-01-01 01:00:58.503237 | orchestrator | 2026-01-01 01:00:58 | INFO  | Task 
62150cd9-e8bf-45ce-855c-5099bf88c85f is in state STARTED 2026-01-01 01:00:58.506113 | orchestrator | 2026-01-01 01:00:58 | INFO  | Task 599ea843-9607-4d23-9dec-5ba522cc2753 is in state STARTED 2026-01-01 01:00:58.508716 | orchestrator | 2026-01-01 01:00:58 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:00:58.508740 | orchestrator | 2026-01-01 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:01:01.560052 | orchestrator | 2026-01-01 01:01:01 | INFO  | Task e96b7077-4bd4-4976-914b-d3b887009974 is in state STARTED 2026-01-01 01:01:01.561290 | orchestrator | 2026-01-01 01:01:01 | INFO  | Task 7ce3d765-99b2-41e3-a3e4-679592c677f7 is in state STARTED 2026-01-01 01:01:01.564140 | orchestrator | 2026-01-01 01:01:01 | INFO  | Task 62150cd9-e8bf-45ce-855c-5099bf88c85f is in state STARTED 2026-01-01 01:01:01.566593 | orchestrator | 2026-01-01 01:01:01 | INFO  | Task 599ea843-9607-4d23-9dec-5ba522cc2753 is in state STARTED 2026-01-01 01:01:01.569664 | orchestrator | 2026-01-01 01:01:01 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:01:01.569700 | orchestrator | 2026-01-01 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:01:04.613380 | orchestrator | 2026-01-01 01:01:04 | INFO  | Task e96b7077-4bd4-4976-914b-d3b887009974 is in state SUCCESS 2026-01-01 01:01:04.615507 | orchestrator | 2026-01-01 01:01:04.615589 | orchestrator | 2026-01-01 01:01:04.615604 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 01:01:04.615615 | orchestrator | 2026-01-01 01:01:04.615625 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:01:04.615636 | orchestrator | Thursday 01 January 2026 00:59:48 +0000 (0:00:00.268) 0:00:00.268 ****** 2026-01-01 01:01:04.615646 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:04.615657 | orchestrator | ok: [testbed-node-1] 
2026-01-01 01:01:04.615667 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:04.615677 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:04.615686 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:04.615696 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:04.615705 | orchestrator | 2026-01-01 01:01:04.615715 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:01:04.615747 | orchestrator | Thursday 01 January 2026 00:59:49 +0000 (0:00:00.880) 0:00:01.148 ****** 2026-01-01 01:01:04.615758 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-01 01:01:04.615768 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-01 01:01:04.615777 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-01 01:01:04.615787 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-01 01:01:04.615796 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-01 01:01:04.615806 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-01 01:01:04.615816 | orchestrator | 2026-01-01 01:01:04.615825 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-01 01:01:04.615835 | orchestrator | 2026-01-01 01:01:04.615845 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-01 01:01:04.615855 | orchestrator | Thursday 01 January 2026 00:59:50 +0000 (0:00:00.824) 0:00:01.972 ****** 2026-01-01 01:01:04.615865 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:01:04.615876 | orchestrator | 2026-01-01 01:01:04.615886 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-01 01:01:04.615896 | orchestrator | Thursday 01 January 
2026 00:59:51 +0000 (0:00:01.325) 0:00:03.298 ****** 2026-01-01 01:01:04.615906 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:04.615915 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:04.615925 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:04.615935 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:04.615944 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:04.615953 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:04.615963 | orchestrator | 2026-01-01 01:01:04.615972 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-01 01:01:04.615982 | orchestrator | Thursday 01 January 2026 00:59:53 +0000 (0:00:01.379) 0:00:04.677 ****** 2026-01-01 01:01:04.615992 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:01:04.616002 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:01:04.616011 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:01:04.616021 | orchestrator | ok: [testbed-node-3] 2026-01-01 01:01:04.616030 | orchestrator | ok: [testbed-node-4] 2026-01-01 01:01:04.616041 | orchestrator | ok: [testbed-node-5] 2026-01-01 01:01:04.616054 | orchestrator | 2026-01-01 01:01:04.616067 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-01 01:01:04.616078 | orchestrator | Thursday 01 January 2026 00:59:54 +0000 (0:00:01.175) 0:00:05.853 ****** 2026-01-01 01:01:04.616090 | orchestrator | ok: [testbed-node-0] => { 2026-01-01 01:01:04.616103 | orchestrator |  "changed": false, 2026-01-01 01:01:04.616115 | orchestrator |  "msg": "All assertions passed" 2026-01-01 01:01:04.616126 | orchestrator | } 2026-01-01 01:01:04.616138 | orchestrator | ok: [testbed-node-1] => { 2026-01-01 01:01:04.616150 | orchestrator |  "changed": false, 2026-01-01 01:01:04.616161 | orchestrator |  "msg": "All assertions passed" 2026-01-01 01:01:04.616173 | orchestrator | } 2026-01-01 01:01:04.616185 | orchestrator | ok: [testbed-node-2] => { 2026-01-01 
01:01:04.616197 | orchestrator |  "changed": false, 2026-01-01 01:01:04.616209 | orchestrator |  "msg": "All assertions passed" 2026-01-01 01:01:04.616221 | orchestrator | } 2026-01-01 01:01:04.616232 | orchestrator | ok: [testbed-node-3] => { 2026-01-01 01:01:04.616243 | orchestrator |  "changed": false, 2026-01-01 01:01:04.616255 | orchestrator |  "msg": "All assertions passed" 2026-01-01 01:01:04.616265 | orchestrator | } 2026-01-01 01:01:04.616275 | orchestrator | ok: [testbed-node-4] => { 2026-01-01 01:01:04.616284 | orchestrator |  "changed": false, 2026-01-01 01:01:04.616294 | orchestrator |  "msg": "All assertions passed" 2026-01-01 01:01:04.616303 | orchestrator | } 2026-01-01 01:01:04.616313 | orchestrator | ok: [testbed-node-5] => { 2026-01-01 01:01:04.616323 | orchestrator |  "changed": false, 2026-01-01 01:01:04.616332 | orchestrator |  "msg": "All assertions passed" 2026-01-01 01:01:04.616349 | orchestrator | } 2026-01-01 01:01:04.616359 | orchestrator | 2026-01-01 01:01:04.616369 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-01 01:01:04.616379 | orchestrator | Thursday 01 January 2026 00:59:55 +0000 (0:00:00.838) 0:00:06.692 ****** 2026-01-01 01:01:04.616388 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:01:04.616398 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:01:04.616408 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:01:04.616417 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:01:04.616427 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:01:04.616436 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:01:04.616446 | orchestrator | 2026-01-01 01:01:04.616456 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-01-01 01:01:04.616496 | orchestrator | Thursday 01 January 2026 00:59:55 +0000 (0:00:00.635) 0:00:07.327 ****** 2026-01-01 01:01:04.616507 | orchestrator | FAILED - RETRYING: 
[testbed-node-0]: neutron | Creating services (5 retries left). 2026-01-01 01:01:04.616517 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (4 retries left). 2026-01-01 01:01:04.616527 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (3 retries left). 2026-01-01 01:01:04.616536 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (2 retries left). 2026-01-01 01:01:04.616546 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (1 retries left). 2026-01-01 01:01:04.616610 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767229259.9389179-3318-108316161577536/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767229259.9389179-3318-108316161577536/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767229259.9389179-3318-108316161577536/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_xvm0xpl6/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_xvm0xpl6/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_xvm0xpl6/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_xvm0xpl6/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_xvm0xpl6/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-01 01:01:04.616634 | orchestrator | 2026-01-01 01:01:04.616645 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:01:04.616655 | orchestrator | testbed-node-0 : ok=6  changed=0 unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-01-01 01:01:04.616672 | orchestrator | testbed-node-1 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 01:01:04.616682 | orchestrator | testbed-node-2 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 01:01:04.616691 | orchestrator | testbed-node-3 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 01:01:04.616701 | orchestrator | testbed-node-4 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 01:01:04.616711 | orchestrator | testbed-node-5 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-01 01:01:04.616720 | orchestrator | 2026-01-01 01:01:04.616730 | orchestrator | 2026-01-01 01:01:04.616739 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:01:04.616749 | orchestrator | Thursday 01 January 2026 01:01:01 +0000 (0:01:05.494) 0:01:12.822 ****** 2026-01-01 01:01:04.616758 | orchestrator | =============================================================================== 2026-01-01 01:01:04.616768 | orchestrator | service-ks-register : neutron | Creating services ---------------------- 65.49s 2026-01-01 01:01:04.616778 | orchestrator | neutron : Get container facts ------------------------------------------- 1.38s 2026-01-01 01:01:04.616787 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.33s 2026-01-01 01:01:04.616796 | orchestrator | neutron : Get container volume facts 
------------------------------------ 1.18s 2026-01-01 01:01:04.616806 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.88s 2026-01-01 01:01:04.616815 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.84s 2026-01-01 01:01:04.616825 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2026-01-01 01:01:04.616844 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.64s 2026-01-01 01:01:04.618318 | orchestrator | 2026-01-01 01:01:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:01:04.621196 | orchestrator | 2026-01-01 01:01:04 | INFO  | Task 7ce3d765-99b2-41e3-a3e4-679592c677f7 is in state STARTED 2026-01-01 01:01:04.624678 | orchestrator | 2026-01-01 01:01:04 | INFO  | Task 62150cd9-e8bf-45ce-855c-5099bf88c85f is in state STARTED 2026-01-01 01:01:04.627444 | orchestrator | 2026-01-01 01:01:04 | INFO  | Task 599ea843-9607-4d23-9dec-5ba522cc2753 is in state STARTED 2026-01-01 01:01:04.628551 | orchestrator | 2026-01-01 01:01:04 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:01:04.628993 | orchestrator | 2026-01-01 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:01:07.679118 | orchestrator | 2026-01-01 01:01:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:01:07.681276 | orchestrator | 2026-01-01 01:01:07 | INFO  | Task 7ce3d765-99b2-41e3-a3e4-679592c677f7 is in state STARTED 2026-01-01 01:01:07.684102 | orchestrator | 2026-01-01 01:01:07 | INFO  | Task 62150cd9-e8bf-45ce-855c-5099bf88c85f is in state STARTED 2026-01-01 01:01:07.686118 | orchestrator | 2026-01-01 01:01:07 | INFO  | Task 599ea843-9607-4d23-9dec-5ba522cc2753 is in state STARTED 2026-01-01 01:01:07.688290 | orchestrator | 2026-01-01 01:01:07 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in 
state STARTED
2026-01-01 01:01:07.688582 | orchestrator | 2026-01-01 01:01:07 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:10.734217 | orchestrator | 2026-01-01 01:01:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:01:10.736394 | orchestrator | 2026-01-01 01:01:10 | INFO  | Task 7ce3d765-99b2-41e3-a3e4-679592c677f7 is in state STARTED
2026-01-01 01:01:10.738787 | orchestrator | 2026-01-01 01:01:10 | INFO  | Task 62150cd9-e8bf-45ce-855c-5099bf88c85f is in state STARTED
2026-01-01 01:01:10.740959 | orchestrator | 2026-01-01 01:01:10 | INFO  | Task 599ea843-9607-4d23-9dec-5ba522cc2753 is in state STARTED
2026-01-01 01:01:10.743354 | orchestrator | 2026-01-01 01:01:10 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED
2026-01-01 01:01:10.743403 | orchestrator | 2026-01-01 01:01:10 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:01:32.142910 | orchestrator | 2026-01-01 01:01:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:01:32.145071 | orchestrator | 2026-01-01 01:01:32 | INFO  | Task 7ce3d765-99b2-41e3-a3e4-679592c677f7 is in state STARTED
2026-01-01 01:01:32.149169 | orchestrator | 2026-01-01 01:01:32 | INFO  | Task 62150cd9-e8bf-45ce-855c-5099bf88c85f is in state SUCCESS
2026-01-01 01:01:32.151434 | orchestrator | 2026-01-01 01:01:32 | INFO  | Task 5a9003e1-ab39-4803-8067-aacfa0348e64 is in state STARTED
2026-01-01 01:01:32.153477 | orchestrator | 2026-01-01 01:01:32 | INFO  | Task 599ea843-9607-4d23-9dec-5ba522cc2753 is in state STARTED
2026-01-01 01:01:32.155287 | orchestrator | 2026-01-01 01:01:32 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED
2026-01-01 01:01:32.155760 | orchestrator | 2026-01-01 01:01:32 | INFO  | Wait 1 second(s) until the next check
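The repeated checks above poll a set of task IDs once per interval until each task leaves the STARTED state. A minimal sketch of such a wait loop, with the state lookup injected as a callback (the function and parameter names here are assumptions for illustration, not the actual OSISM implementation):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1):
    """Poll get_state(task_id) until every task leaves STARTED.

    get_state is injected so the loop can be driven by any backend;
    it is a hypothetical stand-in for a Celery AsyncResult lookup.
    Returns a dict mapping each task ID to its final state.
    """
    pending = set(task_ids)
    final = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                final[task_id] = state
        pending -= set(final)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return final

# Drive the loop with a stub backend that succeeds on the second poll.
calls = {"a": 0, "b": 0}
def fake_state(task_id):
    calls[task_id] += 1
    return "SUCCESS" if calls[task_id] >= 2 else "STARTED"

result = wait_for_tasks(fake_state, ["a", "b"], interval=0)
```

Injecting `get_state` keeps the loop testable without a running task queue; the real log is produced by exactly this kind of check-then-sleep cycle against several concurrent tasks.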
538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED
2026-01-01 01:02:08.842117 | orchestrator | 2026-01-01 01:02:08 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:02:11.895353 | orchestrator | 2026-01-01 01:02:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 01:02:11.897144 | orchestrator | 2026-01-01 01:02:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:02:11.899386 | orchestrator | 2026-01-01 01:02:11 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED
2026-01-01 01:02:11.901074 | orchestrator | 2026-01-01 01:02:11 | INFO  | Task 7ce3d765-99b2-41e3-a3e4-679592c677f7 is in state SUCCESS
2026-01-01 01:02:11.901755 | orchestrator |
2026-01-01 01:02:11.901773 | orchestrator |
2026-01-01 01:02:11.901779 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-01 01:02:11.901784 | orchestrator |
2026-01-01 01:02:11.901788 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-01 01:02:11.901794 | orchestrator | Thursday 01 January 2026 01:00:32 +0000 (0:00:00.233) 0:00:00.233 ******
2026-01-01 01:02:11.901798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-01 01:02:11.901805 | orchestrator |
2026-01-01 01:02:11.901821 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-01 01:02:11.901825 | orchestrator | Thursday 01 January 2026 01:00:32 +0000 (0:00:00.258) 0:00:00.492 ******
2026-01-01 01:02:11.901830 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-01 01:02:11.901834 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-01 01:02:11.901839 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-01 01:02:11.901844 | orchestrator |
2026-01-01 01:02:11.901849 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-01 01:02:11.901853 | orchestrator | Thursday 01 January 2026 01:00:33 +0000 (0:00:01.227) 0:00:01.719 ******
2026-01-01 01:02:11.901857 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-01 01:02:11.901862 | orchestrator |
2026-01-01 01:02:11.901866 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-01 01:02:11.901870 | orchestrator | Thursday 01 January 2026 01:00:35 +0000 (0:00:01.701) 0:00:03.420 ******
2026-01-01 01:02:11.901874 | orchestrator | changed: [testbed-manager]
2026-01-01 01:02:11.901879 | orchestrator |
2026-01-01 01:02:11.901883 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-01 01:02:11.901887 | orchestrator | Thursday 01 January 2026 01:00:36 +0000 (0:00:01.022) 0:00:04.443 ******
2026-01-01 01:02:11.901891 | orchestrator | changed: [testbed-manager]
2026-01-01 01:02:11.901895 | orchestrator |
2026-01-01 01:02:11.901899 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-01 01:02:11.901903 | orchestrator | Thursday 01 January 2026 01:00:37 +0000 (0:00:00.993) 0:00:05.437 ******
2026-01-01 01:02:11.901921 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-01 01:02:11.901925 | orchestrator | ok: [testbed-manager]
2026-01-01 01:02:11.901929 | orchestrator |
2026-01-01 01:02:11.901933 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-01 01:02:11.901937 | orchestrator | Thursday 01 January 2026 01:01:19 +0000 (0:00:42.029) 0:00:47.467 ******
2026-01-01 01:02:11.901942 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-01 01:02:11.901946 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-01 01:02:11.901950 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-01 01:02:11.901954 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-01 01:02:11.901958 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-01 01:02:11.901962 | orchestrator |
2026-01-01 01:02:11.901966 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-01 01:02:11.901970 | orchestrator | Thursday 01 January 2026 01:01:24 +0000 (0:00:04.419) 0:00:51.887 ******
2026-01-01 01:02:11.901974 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-01 01:02:11.901978 | orchestrator |
2026-01-01 01:02:11.901982 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-01 01:02:11.901987 | orchestrator | Thursday 01 January 2026 01:01:24 +0000 (0:00:00.161) 0:00:52.387 ******
2026-01-01 01:02:11.901991 | orchestrator | skipping: [testbed-manager]
2026-01-01 01:02:11.901995 | orchestrator |
2026-01-01 01:02:11.901998 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-01 01:02:11.902003 | orchestrator | Thursday 01 January 2026 01:01:24 +0000 (0:00:00.535) 0:00:52.549 ******
2026-01-01 01:02:11.902007 | orchestrator | skipping: [testbed-manager]
2026-01-01 01:02:11.902011 | orchestrator |
2026-01-01 01:02:11.902039 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-01 01:02:11.902044 | orchestrator | Thursday 01 January 2026 01:01:25 +0000 (0:00:00.535) 0:00:53.084 ******
2026-01-01 01:02:11.902048 | orchestrator | changed: [testbed-manager]
2026-01-01 01:02:11.902052 | orchestrator |
2026-01-01 01:02:11.902056 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-01 01:02:11.902060 | orchestrator | Thursday 01 January 2026 01:01:26 +0000 (0:00:01.578) 0:00:54.662 ******
2026-01-01 01:02:11.902064 | orchestrator | changed: [testbed-manager]
2026-01-01 01:02:11.902067 | orchestrator |
2026-01-01 01:02:11.902071 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-01 01:02:11.902075 | orchestrator | Thursday 01 January 2026 01:01:27 +0000 (0:00:00.750) 0:00:55.413 ******
2026-01-01 01:02:11.902079 | orchestrator | changed: [testbed-manager]
2026-01-01 01:02:11.902083 | orchestrator |
2026-01-01 01:02:11.902087 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-01 01:02:11.902091 | orchestrator | Thursday 01 January 2026 01:01:28 +0000 (0:00:00.598) 0:00:56.011 ******
2026-01-01 01:02:11.902095 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-01 01:02:11.902098 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-01 01:02:11.902102 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-01 01:02:11.902106 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-01 01:02:11.902110 | orchestrator |
2026-01-01 01:02:11.902114 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:02:11.902118 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-01 01:02:11.902124 | orchestrator |
2026-01-01 01:02:11.902128 | orchestrator |
2026-01-01 01:02:11.902140 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 01:02:11.902144 | orchestrator | Thursday 01 January 2026 01:01:29 +0000 (0:00:01.651) 0:00:57.662 ******
2026-01-01 01:02:11.902149 | orchestrator | ===============================================================================
2026-01-01 01:02:11.902157 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.03s
2026-01-01 01:02:11.902161 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.42s
2026-01-01 01:02:11.902165 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.70s
2026-01-01 01:02:11.902173 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.65s
2026-01-01 01:02:11.902177 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.58s
2026-01-01 01:02:11.902181 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.23s
2026-01-01 01:02:11.902185 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.02s
2026-01-01 01:02:11.902189 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.99s
2026-01-01 01:02:11.902193 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.75s
2026-01-01 01:02:11.902197 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s
2026-01-01 01:02:11.902200 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.54s
2026-01-01 01:02:11.902205 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s
2026-01-01 01:02:11.902209 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s
2026-01-01 01:02:11.902213 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s
2026-01-01 01:02:11.902217 | orchestrator |
2026-01-01 01:02:11.902221 | orchestrator |
2026-01-01 01:02:11.902225 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 01:02:11.902229 | orchestrator |
2026-01-01 01:02:11.902233 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 01:02:11.902237 | orchestrator | Thursday 01 January 2026 01:01:01 +0000 (0:00:00.292) 0:00:00.292 ******
2026-01-01 01:02:11.902241 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:02:11.902245 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:02:11.902249 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:02:11.902253 | orchestrator |
2026-01-01 01:02:11.902257 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 01:02:11.902261 | orchestrator | Thursday 01 January 2026 01:01:01 +0000 (0:00:00.306) 0:00:00.598 ******
2026-01-01 01:02:11.902265 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-01-01 01:02:11.902269 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-01-01 01:02:11.902273 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-01-01 01:02:11.902277 | orchestrator |
2026-01-01 01:02:11.902281 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-01-01 01:02:11.902285 | orchestrator |
2026-01-01 01:02:11.902289 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-01-01 01:02:11.902293 | orchestrator | Thursday 01 January 2026 01:01:02 +0000 (0:00:00.472) 0:00:01.070 ******
2026-01-01 01:02:11.902297 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-01 01:02:11.902302 | orchestrator |
2026-01-01 01:02:11.902306 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-01-01 01:02:11.902310 | orchestrator | Thursday 01 January 2026 01:01:02 +0000 (0:00:00.539) 0:00:01.610 ******
2026-01-01 01:02:11.902314 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (5 retries left).
2026-01-01 01:02:11.902318 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (4 retries left).
2026-01-01 01:02:11.902322 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (3 retries left).
2026-01-01 01:02:11.902326 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (2 retries left).
2026-01-01 01:02:11.902330 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (1 retries left).
2026-01-01 01:02:11.902359 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000.
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767229328.3450003-3725-38634743514189/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767229328.3450003-3725-38634743514189/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767229328.3450003-3725-38634743514189/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_c303fyzq/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_c303fyzq/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_c303fyzq/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_c303fyzq/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_c303fyzq/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2026-01-01 01:02:11.902370 | orchestrator |
2026-01-01 01:02:11.902375 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:02:11.902379 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-01-01 01:02:11.902383 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:02:11.902389 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:02:11.902394 | orchestrator |
2026-01-01 01:02:11.902399 | orchestrator |
2026-01-01 01:02:11.902403 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 01:02:11.902408 | orchestrator | Thursday 01 January 2026 01:02:09 +0000 (0:01:06.797) 0:01:08.407 ******
2026-01-01 01:02:11.902413 | orchestrator | ===============================================================================
2026-01-01 01:02:11.902442 | orchestrator | service-ks-register : placement | Creating services -------------------- 66.80s
2026-01-01 01:02:11.902447 | orchestrator | placement : include_tasks ----------------------------------------------- 0.54s
2026-01-01 01:02:11.902451 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2026-01-01 01:02:11.902456 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-01-01 01:02:11.903596 | orchestrator | 2026-01-01 01:02:11 | INFO  | Task 5a9003e1-ab39-4803-8067-aacfa0348e64 is in state STARTED
2026-01-01 01:02:11.905134 | orchestrator | 2026-01-01 01:02:11 | INFO  | Task 599ea843-9607-4d23-9dec-5ba522cc2753 is in state SUCCESS
2026-01-01 01:02:11.905570 | orchestrator |
2026-01-01 01:02:11.905581 | orchestrator |
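The placement failure above is keystoneauth1 raising DiscoveryFailure after the identity endpoint at https://api-int.testbed.osism.xyz:5000 answered version discovery with HTTP 503 on all five attempts. The retry-then-raise pattern behind those "FAILED - RETRYING" lines can be sketched with an injected status check (`register_service`, `check_endpoint`, and the `DiscoveryFailure` stand-in are hypothetical names for illustration, not the keystoneauth1 or kolla-ansible API):

```python
class DiscoveryFailure(Exception):
    """Stand-in for keystoneauth1.exceptions.discovery.DiscoveryFailure."""

def register_service(check_endpoint, auth_url, retries=5):
    """Attempt identity version discovery up to `retries` times.

    check_endpoint(auth_url) returns an HTTP status code; discovery
    succeeds on 200, anything else is retried and finally raised.
    """
    for attempt in range(1, retries + 1):
        status = check_endpoint(auth_url)
        if status == 200:
            return attempt  # discovery succeeded on this attempt
        print(f"FAILED - RETRYING ({retries - attempt} retries left).")
    raise DiscoveryFailure(
        f"Could not find versioned identity endpoints at {auth_url} "
        f"(last status: {status})"
    )

# A stub endpoint that is persistently unavailable, as in the log (HTTP 503).
def always_503(url):
    return 503

try:
    register_service(always_503, "https://api-int.testbed.osism.xyz:5000")
except DiscoveryFailure as exc:
    outcome = str(exc)
```

Injecting the status check makes the exhaustion path easy to exercise: with a backend that only ever returns 503, every retry is consumed and the final exception carries the endpoint and last status, mirroring the "Service Unavailable (HTTP 503)" tail of the traceback above.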
2026-01-01 01:02:11.905585 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 01:02:11.905589 | orchestrator | 2026-01-01 01:02:11.905593 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:02:11.905597 | orchestrator | Thursday 01 January 2026 01:01:01 +0000 (0:00:00.263) 0:00:00.263 ****** 2026-01-01 01:02:11.905601 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:02:11.905606 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:02:11.905610 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:02:11.905614 | orchestrator | 2026-01-01 01:02:11.905618 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:02:11.905622 | orchestrator | Thursday 01 January 2026 01:01:01 +0000 (0:00:00.314) 0:00:00.578 ****** 2026-01-01 01:02:11.905626 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-01-01 01:02:11.905631 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-01-01 01:02:11.905635 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-01-01 01:02:11.905639 | orchestrator | 2026-01-01 01:02:11.905643 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-01-01 01:02:11.905647 | orchestrator | 2026-01-01 01:02:11.905651 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-01 01:02:11.905655 | orchestrator | Thursday 01 January 2026 01:01:02 +0000 (0:00:00.471) 0:00:01.049 ****** 2026-01-01 01:02:11.905659 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:02:11.905663 | orchestrator | 2026-01-01 01:02:11.905667 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-01-01 01:02:11.905671 | orchestrator | Thursday 01 
January 2026 01:01:02 +0000 (0:00:00.529) 0:00:01.578 ****** 2026-01-01 01:02:11.905675 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (5 retries left). 2026-01-01 01:02:11.905679 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (4 retries left). 2026-01-01 01:02:11.905684 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (3 retries left). 2026-01-01 01:02:11.905688 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (2 retries left). 2026-01-01 01:02:11.905692 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (1 retries left). 2026-01-01 01:02:11.905710 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 985, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1767229328.560766-3743-139669607716588/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1767229328.560766-3743-139669607716588/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File 
\"/tmp/ansible-tmp-1767229328.560766-3743-139669607716588/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_keystone_service_payload_pt5kjuqu/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_pt5kjuqu/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_pt5kjuqu/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_pt5kjuqu/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_pt5kjuqu/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Service Unavailable (HTTP 503)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2026-01-01 01:02:11.905725 | orchestrator | 2026-01-01 01:02:11.905729 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:02:11.905736 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-01-01 01:02:11.905740 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:02:11.905744 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:02:11.905749 | orchestrator | 2026-01-01 01:02:11.905753 | orchestrator | 2026-01-01 01:02:11.905757 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:02:11.905761 | orchestrator | Thursday 01 January 2026 01:02:09 +0000 (0:01:06.993) 0:01:08.572 ****** 2026-01-01 01:02:11.905765 | orchestrator | =============================================================================== 2026-01-01 01:02:11.905769 | orchestrator | service-ks-register : magnum | Creating services ----------------------- 66.99s 2026-01-01 01:02:11.905773 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.53s 2026-01-01 01:02:11.905778 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-01-01 01:02:11.905782 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-01-01 01:02:11.906680 | orchestrator | 2026-01-01 01:02:11 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:02:11.906771 | orchestrator | 2026-01-01 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:02:14.957271 | orchestrator | 2026-01-01 01:02:14 | INFO  | Task 
a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:02:14.957878 | orchestrator | 2026-01-01 01:02:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:02:14.958838 | orchestrator | 2026-01-01 01:02:14 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:02:14.959994 | orchestrator | 2026-01-01 01:02:14 | INFO  | Task 5a9003e1-ab39-4803-8067-aacfa0348e64 is in state STARTED 2026-01-01 01:02:14.960910 | orchestrator | 2026-01-01 01:02:14 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:02:14.960938 | orchestrator | 2026-01-01 01:02:14 | INFO  | Wait 1 second(s) until the next check [repeated task-state polling cycles from 01:02:17 through 01:03:58 trimmed: tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20, 8e792a69-0260-4269-a3ca-ead7b2153645, 823cfce9-5188-49ce-98d3-f37a74406fdd and 538afc4e-4ac2-4cda-b18a-67fc5e410b15 remained in state STARTED, rechecked every ~3 seconds; task 5a9003e1-ab39-4803-8067-aacfa0348e64 reached state SUCCESS at 01:03:12 and left the polling set] 2026-01-01 01:04:01.693191 | orchestrator | 2026-01-01 01:04:01 | INFO  | Task 
a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:01.694166 | orchestrator | 2026-01-01 01:04:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:01.696044 | orchestrator | 2026-01-01 01:04:01 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:01.697023 | orchestrator | 2026-01-01 01:04:01 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:01.697187 | orchestrator | 2026-01-01 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:04.742922 | orchestrator | 2026-01-01 01:04:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:04.744302 | orchestrator | 2026-01-01 01:04:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:04.745861 | orchestrator | 2026-01-01 01:04:04 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:04.747903 | orchestrator | 2026-01-01 01:04:04 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:04.747944 | orchestrator | 2026-01-01 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:07.792950 | orchestrator | 2026-01-01 01:04:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:07.794893 | orchestrator | 2026-01-01 01:04:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:07.797757 | orchestrator | 2026-01-01 01:04:07 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:07.800480 | orchestrator | 2026-01-01 01:04:07 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:07.801437 | orchestrator | 2026-01-01 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:10.831317 | orchestrator | 2026-01-01 01:04:10 | INFO  | Task 
a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:10.832852 | orchestrator | 2026-01-01 01:04:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:10.833773 | orchestrator | 2026-01-01 01:04:10 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:10.834958 | orchestrator | 2026-01-01 01:04:10 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:10.834996 | orchestrator | 2026-01-01 01:04:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:13.879442 | orchestrator | 2026-01-01 01:04:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:13.881080 | orchestrator | 2026-01-01 01:04:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:13.885123 | orchestrator | 2026-01-01 01:04:13 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:13.887669 | orchestrator | 2026-01-01 01:04:13 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:13.887721 | orchestrator | 2026-01-01 01:04:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:16.927198 | orchestrator | 2026-01-01 01:04:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:16.929212 | orchestrator | 2026-01-01 01:04:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:16.931911 | orchestrator | 2026-01-01 01:04:16 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:16.934618 | orchestrator | 2026-01-01 01:04:16 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:16.934660 | orchestrator | 2026-01-01 01:04:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:19.970834 | orchestrator | 2026-01-01 01:04:19 | INFO  | Task 
a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:19.971757 | orchestrator | 2026-01-01 01:04:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:19.973293 | orchestrator | 2026-01-01 01:04:19 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:19.974531 | orchestrator | 2026-01-01 01:04:19 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:19.974623 | orchestrator | 2026-01-01 01:04:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:23.030996 | orchestrator | 2026-01-01 01:04:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:23.034326 | orchestrator | 2026-01-01 01:04:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:23.036866 | orchestrator | 2026-01-01 01:04:23 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:23.038635 | orchestrator | 2026-01-01 01:04:23 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:23.038678 | orchestrator | 2026-01-01 01:04:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:26.095561 | orchestrator | 2026-01-01 01:04:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:26.095709 | orchestrator | 2026-01-01 01:04:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:26.096398 | orchestrator | 2026-01-01 01:04:26 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:26.099919 | orchestrator | 2026-01-01 01:04:26 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:26.099946 | orchestrator | 2026-01-01 01:04:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:29.145976 | orchestrator | 2026-01-01 01:04:29 | INFO  | Task 
a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:29.149312 | orchestrator | 2026-01-01 01:04:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:29.151320 | orchestrator | 2026-01-01 01:04:29 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:29.153891 | orchestrator | 2026-01-01 01:04:29 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:29.153946 | orchestrator | 2026-01-01 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:32.206753 | orchestrator | 2026-01-01 01:04:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:32.209081 | orchestrator | 2026-01-01 01:04:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:32.212030 | orchestrator | 2026-01-01 01:04:32 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:32.214210 | orchestrator | 2026-01-01 01:04:32 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:32.214236 | orchestrator | 2026-01-01 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:35.270864 | orchestrator | 2026-01-01 01:04:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:35.272478 | orchestrator | 2026-01-01 01:04:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:35.274126 | orchestrator | 2026-01-01 01:04:35 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:35.276315 | orchestrator | 2026-01-01 01:04:35 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:35.276378 | orchestrator | 2026-01-01 01:04:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:38.334566 | orchestrator | 2026-01-01 01:04:38 | INFO  | Task 
a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:38.334653 | orchestrator | 2026-01-01 01:04:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:38.334664 | orchestrator | 2026-01-01 01:04:38 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:38.334673 | orchestrator | 2026-01-01 01:04:38 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:38.334682 | orchestrator | 2026-01-01 01:04:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:41.366240 | orchestrator | 2026-01-01 01:04:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:41.367712 | orchestrator | 2026-01-01 01:04:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:41.368660 | orchestrator | 2026-01-01 01:04:41 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:41.369804 | orchestrator | 2026-01-01 01:04:41 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:41.369857 | orchestrator | 2026-01-01 01:04:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:44.409809 | orchestrator | 2026-01-01 01:04:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:44.411567 | orchestrator | 2026-01-01 01:04:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:44.412983 | orchestrator | 2026-01-01 01:04:44 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:44.414900 | orchestrator | 2026-01-01 01:04:44 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:44.415108 | orchestrator | 2026-01-01 01:04:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:47.460500 | orchestrator | 2026-01-01 01:04:47 | INFO  | Task 
a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:47.462089 | orchestrator | 2026-01-01 01:04:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:47.462794 | orchestrator | 2026-01-01 01:04:47 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:47.463918 | orchestrator | 2026-01-01 01:04:47 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:47.463946 | orchestrator | 2026-01-01 01:04:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:50.513532 | orchestrator | 2026-01-01 01:04:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:50.515070 | orchestrator | 2026-01-01 01:04:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:50.517469 | orchestrator | 2026-01-01 01:04:50 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:50.520031 | orchestrator | 2026-01-01 01:04:50 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:50.520513 | orchestrator | 2026-01-01 01:04:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:53.569614 | orchestrator | 2026-01-01 01:04:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:53.571968 | orchestrator | 2026-01-01 01:04:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:53.575982 | orchestrator | 2026-01-01 01:04:53 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:53.577700 | orchestrator | 2026-01-01 01:04:53 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:53.577746 | orchestrator | 2026-01-01 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:56.619070 | orchestrator | 2026-01-01 01:04:56 | INFO  | Task 
a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:56.619206 | orchestrator | 2026-01-01 01:04:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:56.621703 | orchestrator | 2026-01-01 01:04:56 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:56.623582 | orchestrator | 2026-01-01 01:04:56 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:56.623638 | orchestrator | 2026-01-01 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:04:59.665980 | orchestrator | 2026-01-01 01:04:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:04:59.666157 | orchestrator | 2026-01-01 01:04:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:04:59.666613 | orchestrator | 2026-01-01 01:04:59 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:04:59.668483 | orchestrator | 2026-01-01 01:04:59 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:04:59.669593 | orchestrator | 2026-01-01 01:04:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:02.720600 | orchestrator | 2026-01-01 01:05:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:05:02.723375 | orchestrator | 2026-01-01 01:05:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:05:02.725623 | orchestrator | 2026-01-01 01:05:02 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:05:02.728409 | orchestrator | 2026-01-01 01:05:02 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:02.728431 | orchestrator | 2026-01-01 01:05:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:05.775659 | orchestrator | 2026-01-01 01:05:05 | INFO  | Task 
a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:05:05.777913 | orchestrator | 2026-01-01 01:05:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:05:05.778938 | orchestrator | 2026-01-01 01:05:05 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:05:05.780720 | orchestrator | 2026-01-01 01:05:05 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:05.780777 | orchestrator | 2026-01-01 01:05:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:08.823181 | orchestrator | 2026-01-01 01:05:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:05:08.825161 | orchestrator | 2026-01-01 01:05:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:05:08.829174 | orchestrator | 2026-01-01 01:05:08 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:05:08.830760 | orchestrator | 2026-01-01 01:05:08 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:08.830855 | orchestrator | 2026-01-01 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:11.875182 | orchestrator | 2026-01-01 01:05:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:05:11.876493 | orchestrator | 2026-01-01 01:05:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:05:11.879402 | orchestrator | 2026-01-01 01:05:11 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:05:11.881412 | orchestrator | 2026-01-01 01:05:11 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:11.881440 | orchestrator | 2026-01-01 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:14.937071 | orchestrator | 2026-01-01 01:05:14 | INFO  | Task 
a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:05:14.938624 | orchestrator | 2026-01-01 01:05:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:05:14.943569 | orchestrator | 2026-01-01 01:05:14 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:05:14.945492 | orchestrator | 2026-01-01 01:05:14 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:14.945900 | orchestrator | 2026-01-01 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:17.997076 | orchestrator | 2026-01-01 01:05:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:05:17.998290 | orchestrator | 2026-01-01 01:05:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:05:18.001250 | orchestrator | 2026-01-01 01:05:18 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:05:18.005141 | orchestrator | 2026-01-01 01:05:18 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:18.005401 | orchestrator | 2026-01-01 01:05:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:21.054528 | orchestrator | 2026-01-01 01:05:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:05:21.055665 | orchestrator | 2026-01-01 01:05:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:05:21.056626 | orchestrator | 2026-01-01 01:05:21 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED 2026-01-01 01:05:21.057929 | orchestrator | 2026-01-01 01:05:21 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:21.057953 | orchestrator | 2026-01-01 01:05:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:24.100414 | orchestrator | 2026-01-01 01:05:24 | INFO  | Task 
a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 01:05:24.102082 | orchestrator | 2026-01-01 01:05:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:05:24.103551 | orchestrator | 2026-01-01 01:05:24 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state STARTED
2026-01-01 01:05:24.105452 | orchestrator | 2026-01-01 01:05:24 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED
2026-01-01 01:05:24.105482 | orchestrator | 2026-01-01 01:05:24 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:05:27.151658 | orchestrator | 2026-01-01 01:05:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 01:05:27.151848 | orchestrator | 2026-01-01 01:05:27 | INFO  | Task a0269e98-bb90-42dc-b3b5-ad10f32fc242 is in state STARTED
2026-01-01 01:05:27.157010 | orchestrator | 2026-01-01 01:05:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:05:27.167239 | orchestrator |
2026-01-01 01:05:27.167355 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-01 01:05:27.167374 | orchestrator | 2.16.14
2026-01-01 01:05:27.167387 | orchestrator |
2026-01-01 01:05:27.167398 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************
2026-01-01 01:05:27.167409 | orchestrator |
2026-01-01 01:05:27.167421 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-01-01 01:05:27.167432 | orchestrator | Thursday 01 January 2026 01:01:34 +0000 (0:00:00.268) 0:00:00.268 ******
2026-01-01 01:05:27.167444 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.167456 | orchestrator |
2026-01-01 01:05:27.167467 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-01-01 01:05:27.167478 | orchestrator | Thursday 01 January 2026 01:01:36 +0000 (0:00:01.613)
0:00:01.882 ******
2026-01-01 01:05:27.167488 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.167499 | orchestrator |
2026-01-01 01:05:27.167510 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-01-01 01:05:27.167521 | orchestrator | Thursday 01 January 2026 01:01:37 +0000 (0:00:01.068) 0:00:02.951 ******
2026-01-01 01:05:27.167531 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.167610 | orchestrator |
2026-01-01 01:05:27.167664 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-01-01 01:05:27.167676 | orchestrator | Thursday 01 January 2026 01:01:38 +0000 (0:00:01.134) 0:00:04.085 ******
2026-01-01 01:05:27.167688 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.167699 | orchestrator |
2026-01-01 01:05:27.167710 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-01-01 01:05:27.167721 | orchestrator | Thursday 01 January 2026 01:01:39 +0000 (0:00:01.262) 0:00:05.348 ******
2026-01-01 01:05:27.167788 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.167939 | orchestrator |
2026-01-01 01:05:27.167954 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-01-01 01:05:27.167968 | orchestrator | Thursday 01 January 2026 01:01:41 +0000 (0:00:01.110) 0:00:06.459 ******
2026-01-01 01:05:27.167981 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.167995 | orchestrator |
2026-01-01 01:05:27.168008 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-01-01 01:05:27.168022 | orchestrator | Thursday 01 January 2026 01:01:42 +0000 (0:00:01.199) 0:00:07.658 ******
2026-01-01 01:05:27.168035 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.168049 | orchestrator |
2026-01-01 01:05:27.168061 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-01-01 01:05:27.168075 | orchestrator | Thursday 01 January 2026 01:01:44 +0000 (0:00:02.101) 0:00:09.760 ******
2026-01-01 01:05:27.168088 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.168101 | orchestrator |
2026-01-01 01:05:27.168115 | orchestrator | TASK [Create admin user] *******************************************************
2026-01-01 01:05:27.168127 | orchestrator | Thursday 01 January 2026 01:01:45 +0000 (0:00:01.334) 0:00:11.095 ******
2026-01-01 01:05:27.168141 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.168169 | orchestrator |
2026-01-01 01:05:27.168191 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-01-01 01:05:27.168202 | orchestrator | Thursday 01 January 2026 01:02:45 +0000 (0:01:00.045) 0:01:11.140 ******
2026-01-01 01:05:27.168213 | orchestrator | skipping: [testbed-manager]
2026-01-01 01:05:27.168224 | orchestrator |
2026-01-01 01:05:27.168387 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-01 01:05:27.168401 | orchestrator |
2026-01-01 01:05:27.168460 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-01 01:05:27.168472 | orchestrator | Thursday 01 January 2026 01:02:45 +0000 (0:00:00.159) 0:01:11.299 ******
2026-01-01 01:05:27.168483 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:05:27.168494 | orchestrator |
2026-01-01 01:05:27.168571 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-01 01:05:27.168583 | orchestrator |
2026-01-01 01:05:27.168594 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-01 01:05:27.168620 | orchestrator | Thursday 01 January 2026 01:02:57 +0000 (0:00:11.690) 0:01:22.990 ******
2026-01-01 01:05:27.168631 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:05:27.168683 | orchestrator |
2026-01-01 01:05:27.168694 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-01 01:05:27.168705 | orchestrator |
2026-01-01 01:05:27.168715 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-01 01:05:27.168726 | orchestrator | Thursday 01 January 2026 01:02:58 +0000 (0:00:01.316) 0:01:24.307 ******
2026-01-01 01:05:27.168787 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:05:27.168799 | orchestrator |
2026-01-01 01:05:27.168809 | orchestrator | PLAY RECAP *********************************************************************
2026-01-01 01:05:27.168821 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-01 01:05:27.168834 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:05:27.168845 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:05:27.168856 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-01 01:05:27.168867 | orchestrator |
2026-01-01 01:05:27.168878 | orchestrator |
2026-01-01 01:05:27.168888 | orchestrator |
2026-01-01 01:05:27.168899 | orchestrator | TASKS RECAP ********************************************************************
2026-01-01 01:05:27.168994 | orchestrator | Thursday 01 January 2026 01:03:10 +0000 (0:00:11.207) 0:01:35.514 ******
2026-01-01 01:05:27.169006 | orchestrator | ===============================================================================
2026-01-01 01:05:27.169017 | orchestrator | Create admin user ------------------------------------------------------ 60.05s
2026-01-01 01:05:27.169046 | orchestrator | Restart ceph manager service ------------------------------------------- 24.21s
2026-01-01 01:05:27.169058 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.10s
2026-01-01 01:05:27.169069 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.61s
2026-01-01 01:05:27.169080 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.33s
2026-01-01 01:05:27.169090 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.26s
2026-01-01 01:05:27.169101 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.20s
2026-01-01 01:05:27.169112 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.13s
2026-01-01 01:05:27.169122 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.11s
2026-01-01 01:05:27.169133 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.07s
2026-01-01 01:05:27.169144 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s
2026-01-01 01:05:27.169155 | orchestrator |
2026-01-01 01:05:27.169166 | orchestrator |
2026-01-01 01:05:27.169177 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-01 01:05:27.169187 | orchestrator |
2026-01-01 01:05:27.169198 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-01 01:05:27.169209 | orchestrator | Thursday 01 January 2026 01:02:14 +0000 (0:00:00.281) 0:00:00.281 ******
2026-01-01 01:05:27.169220 | orchestrator | ok: [testbed-manager]
2026-01-01 01:05:27.169231 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:05:27.169242 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:05:27.169253 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:05:27.169264 | orchestrator | ok: [testbed-node-3]
2026-01-01 01:05:27.169274 | orchestrator | ok: [testbed-node-4]
2026-01-01
01:05:27.169285 | orchestrator | ok: [testbed-node-5]
2026-01-01 01:05:27.169314 | orchestrator |
2026-01-01 01:05:27.169325 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-01 01:05:27.169336 | orchestrator | Thursday 01 January 2026 01:02:15 +0000 (0:00:00.847) 0:00:01.129 ******
2026-01-01 01:05:27.169347 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-01-01 01:05:27.169358 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-01-01 01:05:27.169369 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-01-01 01:05:27.169379 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-01-01 01:05:27.169390 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-01-01 01:05:27.169401 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-01-01 01:05:27.169411 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-01-01 01:05:27.169422 | orchestrator |
2026-01-01 01:05:27.169433 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-01-01 01:05:27.169443 | orchestrator |
2026-01-01 01:05:27.169454 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-01 01:05:27.169535 | orchestrator | Thursday 01 January 2026 01:02:16 +0000 (0:00:00.808) 0:00:01.938 ******
2026-01-01 01:05:27.169546 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-01 01:05:27.169558 | orchestrator |
2026-01-01 01:05:27.169569 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-01-01 01:05:27.169580 | orchestrator | Thursday 01 January 2026 01:02:17 +0000 (0:00:01.564) 0:00:03.502 ******
2026-01-01
01:05:27.169601 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-01 01:05:27.169626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.169678 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.169691 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.169703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.169714 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.169725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.169738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.169763 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.169775 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.169792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.169804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.169816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.169828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.169840 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.169877 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-01 01:05:27.169893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.169912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.169924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.169935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.169946 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.169958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.169976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.169993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.170005 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.170082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.170099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.170111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.170123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.170143 | orchestrator | 2026-01-01 01:05:27.170154 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-01 01:05:27.170166 | orchestrator | Thursday 01 January 2026 01:02:21 +0000 (0:00:03.116) 0:00:06.619 ****** 2026-01-01 01:05:27.170177 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-01 01:05:27.170188 | orchestrator | 2026-01-01 01:05:27.170199 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-01 01:05:27.170210 | orchestrator | Thursday 01 January 2026 01:02:22 +0000 (0:00:01.355) 0:00:07.974 ****** 2026-01-01 01:05:27.170227 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-01 01:05:27.170239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.170251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.170269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-01-01 01:05:27.170281 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.170343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.170362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.170374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.170396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.170409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.170421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.170438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.170451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.170462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.170480 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.170492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.170508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.170520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.170532 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.170548 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27 | INFO  | Task 823cfce9-5188-49ce-98d3-f37a74406fdd is in state SUCCESS 2026-01-01 01:05:27.171358 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.171402 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-01 01:05:27.171420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.171449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.171461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.171473 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.171500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.171513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.171532 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.171544 | orchestrator | 2026-01-01 01:05:27.171556 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-01 01:05:27.171569 | orchestrator | Thursday 01 January 2026 01:02:28 +0000 (0:00:05.875) 0:00:13.850 ****** 2026-01-01 01:05:27.171581 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-01 01:05:27.171599 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.171611 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.171630 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-01 01:05:27.171652 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.171663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.171674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.171687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.171704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.171715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.171727 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:05:27.171739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.171757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.171776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.171788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.171803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.171816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.171836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.171850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.171864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.171943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.171958 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:27.171972 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:27.171985 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:27.171998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.172012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2026-01-01 01:05:27.172025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172039 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:05:27.172057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.172071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172106 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:05:27.172124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.172136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}})  2026-01-01 01:05:27.172159 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:05:27.172170 | orchestrator | 2026-01-01 01:05:27.172182 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-01 01:05:27.172194 | orchestrator | Thursday 01 January 2026 01:02:29 +0000 (0:00:01.577) 0:00:15.427 ****** 2026-01-01 01:05:27.172205 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-01 01:05:27.172236 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.172250 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172275 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-01 01:05:27.172318 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.172331 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.172343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.172354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.172371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.172407 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:05:27.172419 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:27.172432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.172461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.172481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.172500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.172537 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:27.172554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.172579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.172613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.172662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.172683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-01 01:05:27.172742 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:27.172761 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:05:27.172781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.172809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172863 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:05:27.172882 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-01 01:05:27.172908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-01 01:05:27.172934 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:05:27.172954 | orchestrator | 2026-01-01 01:05:27.172973 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-01 01:05:27.172991 | orchestrator | Thursday 01 January 2026 01:02:31 +0000 (0:00:02.092) 0:00:17.520 ****** 2026-01-01 
01:05:27.173009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.173026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.173065 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-01 01:05:27.173086 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.173107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.173136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.173157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.173177 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.173196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.173216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.173254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.173274 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.173322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.173352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-01-01 01:05:27.173374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.173393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.173413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.173442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.173468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.173489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.173520 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-01 01:05:27.173541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.173561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.173581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 
01:05:27.173645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.173674 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.173695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.173716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.173747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.173766 | orchestrator | 2026-01-01 01:05:27.173787 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-01-01 01:05:27.173806 | orchestrator | Thursday 01 January 2026 01:02:38 +0000 (0:00:06.550) 0:00:24.070 ****** 2026-01-01 01:05:27.173827 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 01:05:27.173846 | orchestrator | 2026-01-01 01:05:27.173865 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-01 01:05:27.173885 | orchestrator | Thursday 01 January 2026 01:02:39 +0000 (0:00:01.368) 0:00:25.439 ****** 2026-01-01 01:05:27.173906 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327119, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2006183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-01-01 01:05:27.173939 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327119, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2006183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.173968 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327119, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2006183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.173989 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327167, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2081199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174011 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327119, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2006183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174111 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327167, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2081199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174134 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327119, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2006183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174153 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327167, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2081199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174188 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327119, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2006183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 01:05:27.174208 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327119, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2006183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174219 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327167, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 
'mtime': 1767225780.0, 'ctime': 1767226576.2081199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174231 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327105, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1985302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174250 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327105, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1985302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174262 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327167, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2081199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174281 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327105, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1985302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174322 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327105, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1985302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174335 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327105, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1985302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174353 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327167, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2081199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174364 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327145, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.205179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174558 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327145, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.205179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 2026-01-01 01:05:27.174579 | orchestrator | 2026-01-01 01:05:27 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:27.174590 | orchestrator | 2026-01-01 01:05:27 | INFO  | Wait 1 second(s) until the next check
'isgid': False})  2026-01-01 01:05:27.174602 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327145, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.205179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174624 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327167, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2081199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 01:05:27.174636 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327089, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.194274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174668 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327105, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1985302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174697 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327145, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.205179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174716 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327089, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.194274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174746 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 
'inode': 1327145, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.205179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174778 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327089, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.194274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174797 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327124, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174815 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327124, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174844 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327124, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174863 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327089, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.194274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174882 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327142, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2049565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-01-01 01:05:27.174910 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327089, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.194274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174941 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327145, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.205179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174961 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327127, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.174981 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327089, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.194274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175023 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327105, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1985302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 01:05:27.175036 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327124, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175047 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327124, 'dev': 124, 
'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175066 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327142, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2049565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175086 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327142, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2049565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175099 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327111, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1991925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175113 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327142, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2049565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175131 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327124, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175144 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327142, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2049565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175158 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327127, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175179 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327166, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2078862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175199 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327127, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175213 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327142, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2049565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175226 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327127, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175239 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327111, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1991925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175255 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327127, 'dev': 124, 'nlink': 1, 
'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175267 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327111, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1991925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175323 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327145, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.205179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 01:05:27.175347 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327127, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2015142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175366 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327111, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1991925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175385 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327076, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1912317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175397 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327111, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1991925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 
01:05:27.175419 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327166, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2078862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175431 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327166, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2078862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175457 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327111, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1991925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175469 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327076, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1912317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175482 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327185, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2110157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175501 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327166, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2078862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175530 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 
1327166, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2078862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175558 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327089, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.194274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-01 01:05:27.175577 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327185, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2110157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175622 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327166, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2078862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175643 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327076, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1912317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175663 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327076, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1912317, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-01 01:05:27.175683 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327161, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.2076483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})
2026-01-01 01:05:27.175700 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-01 01:05:27.175718 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-01-01 01:05:27.175737 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-01 01:05:27.175755 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-01 01:05:27.175767 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-01-01 01:05:27.175778 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-01 01:05:27.175790 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-01-01 01:05:27.175801 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-01 01:05:27.175818 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-01 01:05:27.175836 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2026-01-01 01:05:27.175855 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-01-01 01:05:27.175867 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-01 01:05:27.175879 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-01 01:05:27.175890 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-01 01:05:27.175902 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2026-01-01 01:05:27.175918 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2026-01-01 01:05:27.175936 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-01-01 01:05:27.175953 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-01 01:05:27.175968 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2026-01-01 01:05:27.175992 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-01 01:05:27.176020 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-01-01 01:05:27.176039 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2026-01-01 01:05:27.176065 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-01 01:05:27.176095 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2026-01-01 01:05:27.176122 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2026-01-01 01:05:27.176142 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:05:27.176163 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2026-01-01 01:05:27.176182 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2026-01-01 01:05:27.176202 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-01 01:05:27.176221 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules)
2026-01-01 01:05:27.176258 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2026-01-01 01:05:27.176278 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules)
2026-01-01 01:05:27.176334 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:05:27.176366 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2026-01-01 01:05:27.176384 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2026-01-01 01:05:27.176401 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2026-01-01 01:05:27.176419 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules)
2026-01-01 01:05:27.176437 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules)
2026-01-01 01:05:27.176466 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:05:27.176498 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2026-01-01 01:05:27.176517 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:05:27.176536 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules)
2026-01-01 01:05:27.176564 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules)
2026-01-01 01:05:27.176583 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules)
2026-01-01 01:05:27.176602 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules)
2026-01-01 01:05:27.176622 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules)
2026-01-01 01:05:27.176654 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:05:27.176673 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules)
2026-01-01 01:05:27.176693 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:05:27.176713 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2026-01-01 01:05:27.176725 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/elasticsearch.rules)
2026-01-01 01:05:27.176745 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rec.rules)
2026-01-01 01:05:27.176757 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-01-01 01:05:27.176769 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/redfish.rules)
2026-01-01 01:05:27.176781 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus-extra.rules)
2026-01-01 01:05:27.176801 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rec.rules)
2026-01-01 01:05:27.176817 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rules)
2026-01-01 01:05:27.176829 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rec.rules)
2026-01-01 01:05:27.176846 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/mysql.rules)
2026-01-01 01:05:27.176858 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/rabbitmq.rules)
2026-01-01 01:05:27.176869 | orchestrator |
2026-01-01 01:05:27.176881 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-01-01 01:05:27.176893 | orchestrator | Thursday 01 January 2026 01:03:05 +0000 (0:00:25.948) 0:00:51.387 ****** 2026-01-01
01:05:27.176904 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-01 01:05:27.176915 | orchestrator |
2026-01-01 01:05:27.176926 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-01-01 01:05:27.176938 | orchestrator | Thursday 01 January 2026 01:03:06 +0000 (0:00:00.770) 0:00:52.158 ******
2026-01-01 01:05:27.176949 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-01-01 01:05:27.177021 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-01 01:05:27.177038 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-01-01 01:05:27.177127 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-01 01:05:27.177145 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-01-01 01:05:27.177333 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-01-01 01:05:27.177387 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-01-01 01:05:27.177449 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-01-01 01:05:27.177504 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-01-01 01:05:27.177557 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-01 01:05:27.177568 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-01 01:05:27.177579 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-01 01:05:27.177590 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-01 01:05:27.177601 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-01 01:05:27.177611 | orchestrator |
2026-01-01 01:05:27.177622 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-01-01 01:05:27.177633 | orchestrator | Thursday 01 January 2026 01:03:08 +0000 (0:00:01.737) 0:00:53.895 ******
2026-01-01 01:05:27.177644 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-01 01:05:27.177667 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:05:27.177678 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-01 01:05:27.177689 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:05:27.177700 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-01 01:05:27.177711 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:05:27.177722 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-01 01:05:27.177733 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:05:27.177744 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-01 01:05:27.177755 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:05:27.177766 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-01 01:05:27.177777 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:05:27.177787 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-01 01:05:27.177798 | orchestrator |
2026-01-01 01:05:27.177809 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-01-01 01:05:27.177820 | orchestrator | Thursday 01 January 2026 01:03:23 +0000 (0:00:15.147) 0:01:09.042 ******
2026-01-01 01:05:27.177831 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-01 01:05:27.177842 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:05:27.177853 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-01 01:05:27.177863 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:05:27.177875 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-01 01:05:27.177886 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:05:27.177896 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-01 01:05:27.177908 | orchestrator | skipping: [testbed-node-3]
2026-01-01 01:05:27.177926 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-01 01:05:27.177944 | orchestrator | skipping: [testbed-node-4]
2026-01-01 01:05:27.177963 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-01 01:05:27.177983 | orchestrator | skipping: [testbed-node-5]
2026-01-01 01:05:27.178002 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-01 01:05:27.178066 | orchestrator |
2026-01-01 01:05:27.178091 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-01-01 01:05:27.178112 | orchestrator | Thursday 01 January 2026 01:03:26
+0000 (0:00:02.658) 0:01:11.701 ****** 2026-01-01 01:05:27.178133 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:05:27.178166 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:27.178180 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:05:27.178191 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:05:27.178210 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:05:27.178221 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:27.178232 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:27.178243 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:05:27.178254 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-01 01:05:27.178276 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:05:27.178305 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:05:27.178317 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-01 01:05:27.178328 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:05:27.178339 | orchestrator | 2026-01-01 01:05:27.178350 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-01 01:05:27.178361 | orchestrator | Thursday 01 January 2026 01:03:27 +0000 (0:00:01.511) 0:01:13.213 ****** 2026-01-01 01:05:27.178372 | 
orchestrator | ok: [testbed-manager -> localhost] 2026-01-01 01:05:27.178383 | orchestrator | 2026-01-01 01:05:27.178394 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-01 01:05:27.178405 | orchestrator | Thursday 01 January 2026 01:03:28 +0000 (0:00:00.706) 0:01:13.920 ****** 2026-01-01 01:05:27.178415 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:05:27.178426 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:27.178437 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:27.178448 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:27.178458 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:05:27.178469 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:05:27.178480 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:05:27.178490 | orchestrator | 2026-01-01 01:05:27.178501 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-01 01:05:27.178512 | orchestrator | Thursday 01 January 2026 01:03:29 +0000 (0:00:00.665) 0:01:14.585 ****** 2026-01-01 01:05:27.178523 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:05:27.178533 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:05:27.178544 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:05:27.178555 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:05:27.178566 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:05:27.178576 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:05:27.178587 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:05:27.178598 | orchestrator | 2026-01-01 01:05:27.178608 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-01 01:05:27.178619 | orchestrator | Thursday 01 January 2026 01:03:30 +0000 (0:00:01.908) 0:01:16.494 ****** 2026-01-01 01:05:27.178630 | orchestrator | skipping: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:05:27.178641 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:05:27.178652 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:05:27.178663 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:05:27.178674 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:27.178685 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:27.178696 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:05:27.178706 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:27.178717 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:05:27.178728 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:05:27.178739 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:05:27.178750 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:05:27.178761 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-01 01:05:27.178771 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:05:27.178782 | orchestrator | 2026-01-01 01:05:27.178793 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-01 01:05:27.178804 | orchestrator | Thursday 01 January 2026 01:03:32 +0000 (0:00:01.298) 0:01:17.792 ****** 2026-01-01 01:05:27.178823 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:05:27.178835 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:27.178846 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:05:27.178857 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:27.178868 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:05:27.178879 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:27.178890 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:05:27.178901 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:05:27.178918 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:05:27.178929 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:05:27.178940 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-01 01:05:27.178951 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:05:27.178961 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-01 01:05:27.178972 | orchestrator | 2026-01-01 01:05:27.178990 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-01 01:05:27.179001 | orchestrator | Thursday 01 January 2026 01:03:33 +0000 (0:00:01.407) 0:01:19.200 ****** 2026-01-01 01:05:27.179012 | orchestrator | [WARNING]: Skipped 2026-01-01 01:05:27.179023 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-01 01:05:27.179034 | orchestrator | due to this access issue: 2026-01-01 01:05:27.179044 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-01 01:05:27.179055 | orchestrator | not a directory 2026-01-01 01:05:27.179066 | orchestrator | ok: [testbed-manager -> 
localhost] 2026-01-01 01:05:27.179077 | orchestrator | 2026-01-01 01:05:27.179088 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-01 01:05:27.179099 | orchestrator | Thursday 01 January 2026 01:03:34 +0000 (0:00:01.026) 0:01:20.226 ****** 2026-01-01 01:05:27.179111 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:05:27.179121 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:27.179132 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:27.179143 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:27.179154 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:05:27.179165 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:05:27.179175 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:05:27.179186 | orchestrator | 2026-01-01 01:05:27.179197 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-01 01:05:27.179208 | orchestrator | Thursday 01 January 2026 01:03:35 +0000 (0:00:00.728) 0:01:20.955 ****** 2026-01-01 01:05:27.179219 | orchestrator | skipping: [testbed-manager] 2026-01-01 01:05:27.179230 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:05:27.179240 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:05:27.179251 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:05:27.179262 | orchestrator | skipping: [testbed-node-3] 2026-01-01 01:05:27.179273 | orchestrator | skipping: [testbed-node-4] 2026-01-01 01:05:27.179283 | orchestrator | skipping: [testbed-node-5] 2026-01-01 01:05:27.179313 | orchestrator | 2026-01-01 01:05:27.179324 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-01-01 01:05:27.179335 | orchestrator | Thursday 01 January 2026 01:03:36 +0000 (0:00:00.731) 0:01:21.687 ****** 2026-01-01 01:05:27.179348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.179369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.179381 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-01 01:05:27.179399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.179416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.179428 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.179439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.179451 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.179470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.179481 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-01 01:05:27.179493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-01 01:05:27.179510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.179532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.179543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.179555 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.179573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.179584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.179596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-01-01 01:05:27.179607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.179623 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.179641 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-01 01:05:27.179655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.179674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.179686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.179697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-01 01:05:27.179709 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.179725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-01 01:05:27.179742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-01 01:05:27.179754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-01 01:05:27.179766 | orchestrator | 
2026-01-01 01:05:27.179783 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-01-01 01:05:27.179795 | orchestrator | Thursday 01 January 2026 01:03:39 +0000 (0:00:03.501) 0:01:25.189 ******
2026-01-01 01:05:27.179806 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0) 
2026-01-01 01:05:27.179817 | orchestrator | skipping: [testbed-manager]
2026-01-01 01:05:27.179828 | orchestrator | 
2026-01-01 01:05:27.179839 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-01 01:05:27.179849 | orchestrator | Thursday 01 January 2026 01:03:40 +0000 (0:00:01.218) 0:01:26.408 ******
2026-01-01 01:05:27.179860 | orchestrator | 
2026-01-01 01:05:27.179871 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-01 01:05:27.179882 | orchestrator | Thursday 01 January 2026 01:03:40 +0000 (0:00:00.073) 0:01:26.481 ******
2026-01-01 01:05:27.179893 | orchestrator | 
2026-01-01 01:05:27.179904 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-01 01:05:27.179915 | orchestrator | Thursday 01 January 2026 01:03:41 +0000 (0:00:00.068) 0:01:26.549 ******
2026-01-01 01:05:27.179926 | orchestrator | 
2026-01-01 01:05:27.179937 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-01 01:05:27.179948 | orchestrator | Thursday 01 January 2026 01:03:41 +0000 (0:00:00.061) 0:01:26.611 ******
2026-01-01 01:05:27.179959 | orchestrator | 
2026-01-01 01:05:27.179969 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-01 01:05:27.179980 | orchestrator | Thursday 01 January 2026 01:03:41 +0000 (0:00:00.247) 0:01:26.859 ******
2026-01-01 01:05:27.179991 | orchestrator | 
2026-01-01 01:05:27.180002 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-01 01:05:27.180012 | orchestrator | Thursday 01 January 2026 01:03:41 +0000 (0:00:00.067) 0:01:26.926 ******
2026-01-01 01:05:27.180023 | orchestrator | 
2026-01-01 01:05:27.180034 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-01 01:05:27.180045 | orchestrator | Thursday 01 January 2026 01:03:41 +0000 (0:00:00.066) 0:01:26.993 ******
2026-01-01 01:05:27.180055 | orchestrator | 
2026-01-01 01:05:27.180066 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-01-01 01:05:27.180077 | orchestrator | Thursday 01 January 2026 01:03:41 +0000 (0:00:00.090) 0:01:27.083 ******
2026-01-01 01:05:27.180088 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.180099 | orchestrator | 
2026-01-01 01:05:27.180110 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-01-01 01:05:27.180121 | orchestrator | Thursday 01 January 2026 01:04:07 +0000 (0:00:26.125) 0:01:53.208 ******
2026-01-01 01:05:27.180131 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:05:27.180142 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.180153 | orchestrator | changed: [testbed-node-3]
2026-01-01 01:05:27.180164 | orchestrator | changed: [testbed-node-4]
2026-01-01 01:05:27.180175 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:05:27.180185 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:05:27.180196 | orchestrator | changed: [testbed-node-5]
2026-01-01 01:05:27.180207 | orchestrator | 
2026-01-01 01:05:27.180218 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-01-01 01:05:27.180229 | orchestrator | Thursday 01 January 2026 01:04:20 +0000 (0:00:12.612) 0:02:05.821 ******
2026-01-01 01:05:27.180239 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:05:27.180250 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:05:27.180261 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:05:27.180272 | orchestrator | 
2026-01-01 01:05:27.180282 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-01-01 01:05:27.180313 | orchestrator | Thursday 01 January 2026 01:04:25 +0000 (0:00:05.425) 0:02:11.247 ******
2026-01-01 01:05:27.180324 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:05:27.180335 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:05:27.180346 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:05:27.180357 | orchestrator | 
2026-01-01 01:05:27.180375 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-01-01 01:05:27.180386 | orchestrator | Thursday 01 January 2026 01:04:35 +0000 (0:00:09.922) 0:02:21.169 ******
2026-01-01 01:05:27.180397 | orchestrator | changed: [testbed-node-1]
2026-01-01 01:05:27.180408 | orchestrator | changed: [testbed-node-0]
2026-01-01 01:05:27.180418 | orchestrator | changed: [testbed-manager]
2026-01-01 01:05:27.180429 | orchestrator | changed: [testbed-node-4]
2026-01-01 01:05:27.180440 | orchestrator | changed: [testbed-node-2]
2026-01-01 01:05:27.180456 | orchestrator | changed: [testbed-node-5]
2026-01-01 01:05:27.180468 | orchestrator | changed: [testbed-node-3]
2026-01-01 01:05:27.180479 | orchestrator | 
2026-01-01 01:05:27.180490 
| orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-01-01 01:05:27.180500 | orchestrator | Thursday 01 January 2026 01:04:50 +0000 (0:00:14.667) 0:02:35.837 ****** 2026-01-01 01:05:27.180511 | orchestrator | changed: [testbed-manager] 2026-01-01 01:05:27.180522 | orchestrator | 2026-01-01 01:05:27.180533 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-01 01:05:27.180549 | orchestrator | Thursday 01 January 2026 01:04:58 +0000 (0:00:08.534) 0:02:44.372 ****** 2026-01-01 01:05:27.180560 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:05:27.180571 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:05:27.180582 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:05:27.180593 | orchestrator | 2026-01-01 01:05:27.180603 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-01 01:05:27.180614 | orchestrator | Thursday 01 January 2026 01:05:03 +0000 (0:00:05.002) 0:02:49.374 ****** 2026-01-01 01:05:27.180625 | orchestrator | changed: [testbed-manager] 2026-01-01 01:05:27.180636 | orchestrator | 2026-01-01 01:05:27.180647 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-01 01:05:27.180657 | orchestrator | Thursday 01 January 2026 01:05:15 +0000 (0:00:11.207) 0:03:00.582 ****** 2026-01-01 01:05:27.180668 | orchestrator | changed: [testbed-node-4] 2026-01-01 01:05:27.180679 | orchestrator | changed: [testbed-node-5] 2026-01-01 01:05:27.180690 | orchestrator | changed: [testbed-node-3] 2026-01-01 01:05:27.180701 | orchestrator | 2026-01-01 01:05:27.180712 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:05:27.180723 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-01 01:05:27.180734 | orchestrator | 
testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-01 01:05:27.180746 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-01 01:05:27.180757 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-01 01:05:27.180768 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-01 01:05:27.180779 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-01 01:05:27.180789 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-01 01:05:27.180800 | orchestrator | 2026-01-01 01:05:27.180812 | orchestrator | 2026-01-01 01:05:27.180823 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:05:27.180833 | orchestrator | Thursday 01 January 2026 01:05:24 +0000 (0:00:09.949) 0:03:10.531 ****** 2026-01-01 01:05:27.180845 | orchestrator | =============================================================================== 2026-01-01 01:05:27.180862 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 26.13s 2026-01-01 01:05:27.180873 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.95s 2026-01-01 01:05:27.180884 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.15s 2026-01-01 01:05:27.180895 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.67s 2026-01-01 01:05:27.180906 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.61s 2026-01-01 01:05:27.180916 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 11.21s 2026-01-01 01:05:27.180927 | 
orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.95s 2026-01-01 01:05:27.180938 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.92s 2026-01-01 01:05:27.180948 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.53s 2026-01-01 01:05:27.180959 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.55s 2026-01-01 01:05:27.180970 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.88s 2026-01-01 01:05:27.180980 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.43s 2026-01-01 01:05:27.180991 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.00s 2026-01-01 01:05:27.181002 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.50s 2026-01-01 01:05:27.181012 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.12s 2026-01-01 01:05:27.181023 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.66s 2026-01-01 01:05:27.181034 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.09s 2026-01-01 01:05:27.181045 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.91s 2026-01-01 01:05:27.181056 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.74s 2026-01-01 01:05:27.181067 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.58s 2026-01-01 01:05:30.225901 | orchestrator | 2026-01-01 01:05:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:05:30.227127 | orchestrator | 2026-01-01 01:05:30 | INFO  | Task a0269e98-bb90-42dc-b3b5-ad10f32fc242 is in state STARTED 2026-01-01 
01:05:30.231788 | orchestrator | 2026-01-01 01:05:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:05:30.232913 | orchestrator | 2026-01-01 01:05:30 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:30.232961 | orchestrator | 2026-01-01 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:33.275919 | orchestrator | 2026-01-01 01:05:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:05:33.278577 | orchestrator | 2026-01-01 01:05:33 | INFO  | Task a0269e98-bb90-42dc-b3b5-ad10f32fc242 is in state STARTED 2026-01-01 01:05:33.279814 | orchestrator | 2026-01-01 01:05:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:05:33.281499 | orchestrator | 2026-01-01 01:05:33 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:33.281529 | orchestrator | 2026-01-01 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:36.332002 | orchestrator | 2026-01-01 01:05:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:05:36.333872 | orchestrator | 2026-01-01 01:05:36 | INFO  | Task a0269e98-bb90-42dc-b3b5-ad10f32fc242 is in state STARTED 2026-01-01 01:05:36.335496 | orchestrator | 2026-01-01 01:05:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:05:36.338678 | orchestrator | 2026-01-01 01:05:36 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:36.338724 | orchestrator | 2026-01-01 01:05:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:05:39.387539 | orchestrator | 2026-01-01 01:05:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:05:39.391726 | orchestrator | 2026-01-01 01:05:39 | INFO  | Task a0269e98-bb90-42dc-b3b5-ad10f32fc242 is in state STARTED 2026-01-01 01:05:39.394211 | orchestrator 
| 2026-01-01 01:05:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:05:39.396816 | orchestrator | 2026-01-01 01:05:39 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:05:39.396903 | orchestrator | 2026-01-01 01:05:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:07:11.033529 | orchestrator | 2026-01-01 01:07:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:07:11.034714 | orchestrator | 2026-01-01 01:07:11 | INFO  | Task a0269e98-bb90-42dc-b3b5-ad10f32fc242 is in state STARTED 2026-01-01 01:07:11.036568 | orchestrator | 2026-01-01 01:07:11 | INFO  | Task 
8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:07:11.037666 | orchestrator | 2026-01-01 01:07:11 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state STARTED 2026-01-01 01:07:11.037887 | orchestrator | 2026-01-01 01:07:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:07:17.141999 | orchestrator | 2026-01-01 01:07:17 | INFO  | Task 538afc4e-4ac2-4cda-b18a-67fc5e410b15 is in state SUCCESS 2026-01-01 01:07:59.802879 | orchestrator | 2026-01-01 01:07:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:07:59.806796 | orchestrator | 2026-01-01 01:07:59 | INFO  | Task a0269e98-bb90-42dc-b3b5-ad10f32fc242 is in state SUCCESS 2026-01-01 01:07:59.808720 | orchestrator | 2026-01-01 01:07:59.808804 | orchestrator | 2026-01-01 01:07:59.808814 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-01-01 01:07:59.808821 | orchestrator | 2026-01-01 01:07:59.808827 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-01-01 01:07:59.808833 | orchestrator | Thursday 01 January 2026 01:00:29 +0000 (0:00:00.103) 0:00:00.103 ****** 2026-01-01 01:07:59.808839 | orchestrator | changed: [localhost] 2026-01-01 01:07:59.808845 | orchestrator | 2026-01-01 01:07:59.808851 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-01-01 01:07:59.808857 | orchestrator | Thursday 01 January 2026 01:00:30 +0000 (0:00:01.127) 0:00:01.230 ****** 2026-01-01 01:07:59.808862 | orchestrator | 2026-01-01 01:07:59.808868 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-01-01 01:07:59.808985 | orchestrator | changed: [localhost] 2026-01-01 01:07:59.808991 | orchestrator | 2026-01-01 01:07:59.808996 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-01-01 01:07:59.809002 | orchestrator | Thursday 01 January 2026 01:06:18 +0000 (0:05:47.244) 0:05:48.475 ****** 2026-01-01 01:07:59.809007 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2026-01-01 01:07:59.809013 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (2 retries left). 
2026-01-01 01:07:59.809018 | orchestrator | changed: [localhost] 2026-01-01 01:07:59.809024 | orchestrator | 2026-01-01 01:07:59.809124 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 01:07:59.809130 | orchestrator | 2026-01-01 01:07:59.809136 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:07:59.809141 | orchestrator | Thursday 01 January 2026 01:07:14 +0000 (0:00:55.926) 0:06:44.401 ****** 2026-01-01 01:07:59.809147 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:07:59.809152 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:07:59.809158 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:07:59.809163 | orchestrator | 2026-01-01 01:07:59.809168 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:07:59.809202 | orchestrator | Thursday 01 January 2026 01:07:14 +0000 (0:00:00.321) 0:06:44.723 ****** 2026-01-01 01:07:59.809211 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-01-01 01:07:59.809219 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-01-01 01:07:59.809228 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-01-01 01:07:59.809250 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-01-01 01:07:59.809260 | orchestrator | 2026-01-01 01:07:59.809269 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-01-01 01:07:59.809276 | orchestrator | skipping: no hosts matched 2026-01-01 01:07:59.809283 | orchestrator | 2026-01-01 01:07:59.809290 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:07:59.809296 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:07:59.809306 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:07:59.809314 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:07:59.809321 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-01 01:07:59.809327 | orchestrator | 2026-01-01 01:07:59.809333 | orchestrator | 2026-01-01 01:07:59.809340 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:07:59.809346 | orchestrator | Thursday 01 January 2026 01:07:15 +0000 (0:00:00.669) 0:06:45.392 ****** 2026-01-01 01:07:59.809353 | orchestrator | =============================================================================== 2026-01-01 01:07:59.809359 | orchestrator | Download ironic-agent initramfs --------------------------------------- 347.24s 2026-01-01 01:07:59.809366 | orchestrator | Download ironic-agent kernel ------------------------------------------- 55.93s 2026-01-01 01:07:59.809377 | orchestrator | Ensure the destination directory exists --------------------------------- 1.13s 2026-01-01 01:07:59.809384 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s 2026-01-01 01:07:59.809390 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-01-01 01:07:59.809397 | orchestrator | 2026-01-01 01:07:59.809403 | orchestrator | 2026-01-01 01:07:59.809409 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-01 01:07:59.809416 | orchestrator | 2026-01-01 01:07:59.809422 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-01 01:07:59.809428 | orchestrator | Thursday 01 January 2026 01:05:30 +0000 (0:00:00.264) 0:00:00.264 ****** 2026-01-01 01:07:59.809435 | orchestrator | ok: [testbed-node-0] 2026-01-01 
01:07:59.809441 | orchestrator | ok: [testbed-node-1] 2026-01-01 01:07:59.809448 | orchestrator | ok: [testbed-node-2] 2026-01-01 01:07:59.809454 | orchestrator | 2026-01-01 01:07:59.809460 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-01 01:07:59.809477 | orchestrator | Thursday 01 January 2026 01:05:30 +0000 (0:00:00.341) 0:00:00.605 ****** 2026-01-01 01:07:59.809484 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-01 01:07:59.809491 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-01 01:07:59.809497 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-01 01:07:59.809503 | orchestrator | 2026-01-01 01:07:59.809509 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-01 01:07:59.809516 | orchestrator | 2026-01-01 01:07:59.809563 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-01 01:07:59.809571 | orchestrator | Thursday 01 January 2026 01:05:30 +0000 (0:00:00.473) 0:00:01.079 ****** 2026-01-01 01:07:59.809577 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:07:59.809584 | orchestrator | 2026-01-01 01:07:59.809590 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-01 01:07:59.809597 | orchestrator | Thursday 01 January 2026 01:05:31 +0000 (0:00:00.595) 0:00:01.674 ****** 2026-01-01 01:07:59.809609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809661 | orchestrator | 2026-01-01 01:07:59.809667 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-01 01:07:59.809673 | orchestrator | Thursday 01 January 2026 01:05:32 +0000 (0:00:00.856) 0:00:02.531 ****** 2026-01-01 01:07:59.809678 | 
orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-01-01 01:07:59.809684 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-01-01 01:07:59.809689 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 01:07:59.809695 | orchestrator | 2026-01-01 01:07:59.809700 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-01 01:07:59.809706 | orchestrator | Thursday 01 January 2026 01:05:33 +0000 (0:00:00.902) 0:00:03.433 ****** 2026-01-01 01:07:59.809711 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-01 01:07:59.809716 | orchestrator | 2026-01-01 01:07:59.809722 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-01 01:07:59.809727 | orchestrator | Thursday 01 January 2026 01:05:34 +0000 (0:00:00.788) 0:00:04.221 ****** 2026-01-01 01:07:59.809738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809756 | orchestrator | 2026-01-01 01:07:59.809761 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-01 01:07:59.809767 | orchestrator | Thursday 01 January 2026 01:05:35 +0000 (0:00:01.378) 0:00:05.599 ****** 2026-01-01 01:07:59.809775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-01 01:07:59.809786 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:07:59.809792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-01 01:07:59.809798 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:07:59.809803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-01 01:07:59.809809 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:07:59.809814 | orchestrator | 2026-01-01 01:07:59.809820 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-01 01:07:59.809825 | orchestrator | Thursday 01 January 2026 01:05:35 +0000 (0:00:00.374) 
0:00:05.974 ****** 2026-01-01 01:07:59.809835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-01 01:07:59.809841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-01 01:07:59.809847 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:07:59.809852 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:07:59.809858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-01 01:07:59.809868 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:07:59.809874 | orchestrator | 2026-01-01 01:07:59.809879 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-01 01:07:59.809885 | orchestrator | Thursday 01 January 2026 01:05:36 +0000 (0:00:00.854) 0:00:06.829 ****** 2026-01-01 01:07:59.809893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809914 | orchestrator | 2026-01-01 01:07:59.809920 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-01 01:07:59.809925 | orchestrator | Thursday 01 January 2026 01:05:38 +0000 (0:00:01.421) 0:00:08.250 ****** 2026-01-01 01:07:59.809931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.809958 | orchestrator | 2026-01-01 01:07:59.809966 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-01 01:07:59.809972 | orchestrator | Thursday 01 January 2026 01:05:39 +0000 (0:00:01.482) 0:00:09.733 ****** 2026-01-01 01:07:59.809977 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:07:59.809983 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:07:59.809988 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:07:59.809993 | orchestrator | 2026-01-01 01:07:59.809999 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-01 01:07:59.810004 | orchestrator | Thursday 01 January 2026 01:05:40 +0000 (0:00:00.530) 0:00:10.263 ****** 2026-01-01 01:07:59.810010 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-01 01:07:59.810060 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-01 01:07:59.810067 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-01 01:07:59.810072 | orchestrator | 2026-01-01 01:07:59.810078 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-01 01:07:59.810083 | orchestrator | Thursday 01 January 2026 01:05:41 +0000 (0:00:01.316) 0:00:11.580 ****** 2026-01-01 01:07:59.810088 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-01 01:07:59.810094 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-01 01:07:59.810100 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-01 01:07:59.810105 | orchestrator | 2026-01-01 01:07:59.810111 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-01-01 01:07:59.810116 | orchestrator | Thursday 01 January 2026 01:05:42 +0000 (0:00:01.211) 0:00:12.792 ****** 2026-01-01 01:07:59.810121 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-01 01:07:59.810127 | orchestrator | 2026-01-01 01:07:59.810132 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-01-01 01:07:59.810138 | orchestrator | Thursday 01 January 2026 01:05:43 +0000 (0:00:00.687) 0:00:13.480 ****** 2026-01-01 01:07:59.810143 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-01-01 01:07:59.810148 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-01-01 
2026-01-01 01:07:59.810154 | orchestrator | ok: [testbed-node-0]
2026-01-01 01:07:59.810159 | orchestrator | ok: [testbed-node-1]
2026-01-01 01:07:59.810165 | orchestrator | ok: [testbed-node-2]
2026-01-01 01:07:59.810170 | orchestrator |
2026-01-01 01:07:59.810218 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-01-01 01:07:59.810226 | orchestrator | Thursday 01 January 2026 01:05:44 +0000 (0:00:00.687) 0:00:14.167 ******
2026-01-01 01:07:59.810236 | orchestrator | skipping: [testbed-node-0]
2026-01-01 01:07:59.810242 | orchestrator | skipping: [testbed-node-1]
2026-01-01 01:07:59.810247 | orchestrator | skipping: [testbed-node-2]
2026-01-01 01:07:59.810253 | orchestrator |
2026-01-01 01:07:59.810258 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-01-01 01:07:59.810263 | orchestrator | Thursday 01 January 2026 01:05:44 +0000 (0:00:00.445) 0:00:14.613 ******
2026-01-01 01:07:59.810297 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph-cluster-advanced.json)
2026-01-01 01:07:59.810308 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-cluster-advanced.json)
2026-01-01 01:07:59.810324 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-cluster-advanced.json)
2026-01-01 01:07:59.810338 | orchestrator | changed: [testbed-node-0] => (item=ceph/rbd-overview.json)
2026-01-01 01:07:59.810348 | orchestrator | changed: [testbed-node-1] => (item=ceph/rbd-overview.json)
2026-01-01 01:07:59.810365 | orchestrator | changed: [testbed-node-2] => (item=ceph/rbd-overview.json)
2026-01-01 01:07:59.810380 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph_pools.json)
2026-01-01 01:07:59.810390 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph_pools.json)
2026-01-01 01:07:59.810399 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph_pools.json)
2026-01-01 01:07:59.810412 | orchestrator | changed: [testbed-node-0] => (item=ceph/rgw-s3-analytics.json)
2026-01-01 01:07:59.810422 | orchestrator | changed: [testbed-node-1] => (item=ceph/rgw-s3-analytics.json)
2026-01-01 01:07:59.810431 | orchestrator | changed: [testbed-node-2] => (item=ceph/rgw-s3-analytics.json)
2026-01-01 01:07:59.810454 | orchestrator | changed: [testbed-node-0] => (item=ceph/osd-device-details.json)
2026-01-01 01:07:59.810464 | orchestrator | changed: [testbed-node-1] => (item=ceph/osd-device-details.json)
2026-01-01 01:07:59.810473 | orchestrator | changed: [testbed-node-2] => (item=ceph/osd-device-details.json)
2026-01-01 01:07:59.810482 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-overview.json)
2026-01-01 01:07:59.810488 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-overview.json)
2026-01-01 01:07:59.810493 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-overview.json)
2026-01-01 01:07:59.810517 | orchestrator | changed: [testbed-node-0] => (item=ceph/README.md)
2026-01-01 01:07:59.810524 | orchestrator | changed: [testbed-node-2] => (item=ceph/README.md)
2026-01-01 01:07:59.810530 | orchestrator | changed: [testbed-node-1] => (item=ceph/README.md)
2026-01-01 01:07:59.810538 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph-cluster.json)
2026-01-01 01:07:59.810544 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-cluster.json)
2026-01-01 01:07:59.810550 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-cluster.json)
2026-01-01 01:07:59.810563 | orchestrator | changed: [testbed-node-0] => (item=ceph/cephfs-overview.json)
2026-01-01 01:07:59.810569 | orchestrator | changed: [testbed-node-2] => (item=ceph/cephfs-overview.json)
2026-01-01 01:07:59.810575 | orchestrator | changed: [testbed-node-1] => (item=ceph/cephfs-overview.json)
2026-01-01 01:07:59.810581 | orchestrator | changed: [testbed-node-0] => (item=ceph/pool-detail.json)
2026-01-01 01:07:59.810590 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-detail.json)
2026-01-01 01:07:59.810596 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-detail.json)
2026-01-01 01:07:59.810605 | orchestrator | changed: [testbed-node-0] => (item=ceph/rbd-details.json)
2026-01-01 01:07:59.810616 | orchestrator | changed: [testbed-node-2] => (item=ceph/rbd-details.json)
2026-01-01 01:07:59.810622 | orchestrator | changed: [testbed-node-1] => (item=ceph/rbd-details.json)
2026-01-01 01:07:59.810627 | orchestrator | changed: [testbed-node-0] => (item=ceph/ceph_overview.json)
2026-01-01 01:07:59.810636 | orchestrator | changed: [testbed-node-2] => (item=ceph/ceph_overview.json)
2026-01-01 01:07:59.810642 | orchestrator | changed: [testbed-node-1] => (item=ceph/ceph_overview.json)
2026-01-01 01:07:59.810651 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-detail.json)
2026-01-01 01:07:59.810660 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-detail.json)
2026-01-01 01:07:59.810666 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-detail.json)
2026-01-01 01:07:59.810672 | orchestrator | changed: [testbed-node-0] => (item=ceph/osds-overview.json)
2026-01-01 01:07:59.810678 | orchestrator | changed: [testbed-node-2] => (item=ceph/osds-overview.json)
2026-01-01 01:07:59.810688 | orchestrator | changed: [testbed-node-1] => (item=ceph/osds-overview.json)
2026-01-01 01:07:59.810694 | orchestrator | changed: [testbed-node-0] => (item=ceph/multi-cluster-overview.json)
2026-01-01 01:07:59.810704 | orchestrator | changed: [testbed-node-2] => (item=ceph/multi-cluster-overview.json)
2026-01-01 01:07:59.810713 | orchestrator | changed: [testbed-node-1] => (item=ceph/multi-cluster-overview.json)
2026-01-01 01:07:59.810718 | orchestrator | changed: [testbed-node-2] => (item=ceph/hosts-overview.json)
2026-01-01 01:07:59.810724 | orchestrator | changed: [testbed-node-0] => (item=ceph/hosts-overview.json)
2026-01-01 01:07:59.810733 | orchestrator | changed: [testbed-node-1] => (item=ceph/hosts-overview.json)
2026-01-01 01:07:59.810739 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-overview.json)
2026-01-01 01:07:59.810748 | orchestrator | changed: [testbed-node-0] => (item=ceph/pool-overview.json)
2026-01-01 01:07:59.810773 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-overview.json)
2026-01-01 01:07:59.810780 | orchestrator | changed: [testbed-node-2] => (item=ceph/host-details.json)
2026-01-01 01:07:59.810786 | orchestrator | changed: [testbed-node-0] => (item=ceph/host-details.json)
2026-01-01 01:07:59.810794 | orchestrator | changed: [testbed-node-1] => (item=ceph/host-details.json)
2026-01-01 01:07:59.810800 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-sync-overview.json)
2026-01-01 01:07:59.810810 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-sync-overview.json)
2026-01-01 01:07:59.810821 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-sync-overview.json)
2026-01-01 01:07:59.810827 | orchestrator | changed: [testbed-node-0] => (item=openstack/openstack.json)
57270, 'inode': 1327062, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1884127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1327062, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1884127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1327062, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1884127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1326932, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.161713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1326932, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.161713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1326932, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.161713, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1326913, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1493514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1326913, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1493514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1326913, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1493514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1326961, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1647155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1326961, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1647155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1326961, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1647155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1326903, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1479626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1326903, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1479626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1326903, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1479626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-01-01 01:07:59.810943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1327013, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1771967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1327013, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1771967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1327013, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1771967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1326966, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1741967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1326966, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1741967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1326966, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1741967, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1327018, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1784291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.810998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1327018, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1784291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 22317, 'inode': 1327018, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1784291, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1327052, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1872873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1327052, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1872873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1327052, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1872873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1327008, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1765578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1327008, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1765578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1327008, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1765578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1326957, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.163704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1326957, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.163704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1326957, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.163704, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1326926, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1540396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1326926, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1540396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1326926, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1540396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1326951, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1631856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1326951, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1631856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811118 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1326951, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1631856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1326915, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1519365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1326915, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1519365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 
01:07:59.811143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1326915, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1519365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1326959, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1642914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1326959, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1642914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1326959, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1642914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327033, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1862946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327033, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1862946, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327033, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1862946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1327030, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.181197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 
1327030, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.181197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1327030, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.181197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1326905, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1487405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1326905, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1487405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1326905, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1487405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1326910, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1493514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1326910, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1493514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1326910, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1493514, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327004, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1757057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327004, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1757057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327004, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1757057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1327020, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1790168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-01-01 01:07:59.811374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1327020, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1790168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1327020, 'dev': 124, 'nlink': 1, 'atime': 1767225780.0, 'mtime': 1767225780.0, 'ctime': 1767226576.1790168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-01 01:07:59.811391 | orchestrator | 2026-01-01 01:07:59.811397 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-01-01 01:07:59.811403 | orchestrator | Thursday 01 January 2026 01:06:22 +0000 (0:00:37.918) 0:00:52.532 ****** 2026-01-01 01:07:59.811409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.811418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.811424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-01 01:07:59.811430 | orchestrator | 2026-01-01 01:07:59.811436 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-01 01:07:59.811441 | orchestrator | Thursday 01 
January 2026 01:06:23 +0000 (0:00:00.950) 0:00:53.482 ****** 2026-01-01 01:07:59.811446 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:07:59.811452 | orchestrator | 2026-01-01 01:07:59.811458 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-01 01:07:59.811463 | orchestrator | Thursday 01 January 2026 01:06:25 +0000 (0:00:02.416) 0:00:55.899 ****** 2026-01-01 01:07:59.811468 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:07:59.811474 | orchestrator | 2026-01-01 01:07:59.811479 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-01 01:07:59.811485 | orchestrator | Thursday 01 January 2026 01:06:28 +0000 (0:00:02.445) 0:00:58.344 ****** 2026-01-01 01:07:59.811490 | orchestrator | 2026-01-01 01:07:59.811496 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-01 01:07:59.811501 | orchestrator | Thursday 01 January 2026 01:06:28 +0000 (0:00:00.084) 0:00:58.429 ****** 2026-01-01 01:07:59.811506 | orchestrator | 2026-01-01 01:07:59.811512 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-01 01:07:59.811517 | orchestrator | Thursday 01 January 2026 01:06:28 +0000 (0:00:00.062) 0:00:58.491 ****** 2026-01-01 01:07:59.811522 | orchestrator | 2026-01-01 01:07:59.811528 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-01 01:07:59.811537 | orchestrator | Thursday 01 January 2026 01:06:28 +0000 (0:00:00.234) 0:00:58.726 ****** 2026-01-01 01:07:59.811543 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:07:59.811548 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:07:59.811557 | orchestrator | changed: [testbed-node-0] 2026-01-01 01:07:59.811563 | orchestrator | 2026-01-01 01:07:59.811568 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first 
node] ********* 2026-01-01 01:07:59.811574 | orchestrator | Thursday 01 January 2026 01:06:30 +0000 (0:00:01.771) 0:01:00.498 ****** 2026-01-01 01:07:59.811579 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:07:59.811585 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:07:59.811590 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-01 01:07:59.811596 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-01 01:07:59.811601 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-01-01 01:07:59.811607 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-01-01 01:07:59.811613 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:07:59.811618 | orchestrator | 2026-01-01 01:07:59.811624 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-01-01 01:07:59.811629 | orchestrator | Thursday 01 January 2026 01:07:21 +0000 (0:00:51.492) 0:01:51.990 ****** 2026-01-01 01:07:59.811635 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:07:59.811640 | orchestrator | changed: [testbed-node-2] 2026-01-01 01:07:59.811646 | orchestrator | changed: [testbed-node-1] 2026-01-01 01:07:59.811651 | orchestrator | 2026-01-01 01:07:59.811657 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-01-01 01:07:59.811662 | orchestrator | Thursday 01 January 2026 01:07:51 +0000 (0:00:29.583) 0:02:21.574 ****** 2026-01-01 01:07:59.811668 | orchestrator | ok: [testbed-node-0] 2026-01-01 01:07:59.811673 | orchestrator | 2026-01-01 01:07:59.811679 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-01-01 01:07:59.811684 | orchestrator | Thursday 01 January 2026 
01:07:53 +0000 (0:00:02.249) 0:02:23.823 ****** 2026-01-01 01:07:59.811689 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:07:59.811695 | orchestrator | skipping: [testbed-node-1] 2026-01-01 01:07:59.811700 | orchestrator | skipping: [testbed-node-2] 2026-01-01 01:07:59.811706 | orchestrator | 2026-01-01 01:07:59.811711 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-01-01 01:07:59.811717 | orchestrator | Thursday 01 January 2026 01:07:54 +0000 (0:00:00.424) 0:02:24.247 ****** 2026-01-01 01:07:59.811723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-01-01 01:07:59.811732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-01-01 01:07:59.811738 | orchestrator | 2026-01-01 01:07:59.811743 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-01-01 01:07:59.811749 | orchestrator | Thursday 01 January 2026 01:07:56 +0000 (0:00:02.460) 0:02:26.708 ****** 2026-01-01 01:07:59.811755 | orchestrator | skipping: [testbed-node-0] 2026-01-01 01:07:59.811760 | orchestrator | 2026-01-01 01:07:59.811766 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-01 01:07:59.811771 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-01 01:07:59.811784 | orchestrator | testbed-node-1 : ok=14  
changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-01 01:07:59.811789 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-01 01:07:59.811795 | orchestrator | 2026-01-01 01:07:59.811800 | orchestrator | 2026-01-01 01:07:59.811806 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-01 01:07:59.811811 | orchestrator | Thursday 01 January 2026 01:07:56 +0000 (0:00:00.282) 0:02:26.991 ****** 2026-01-01 01:07:59.811817 | orchestrator | =============================================================================== 2026-01-01 01:07:59.811822 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.49s 2026-01-01 01:07:59.811828 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.92s 2026-01-01 01:07:59.811833 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 29.58s 2026-01-01 01:07:59.811838 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.46s 2026-01-01 01:07:59.811844 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.45s 2026-01-01 01:07:59.811849 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.42s 2026-01-01 01:07:59.811855 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.25s 2026-01-01 01:07:59.811860 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.77s 2026-01-01 01:07:59.811866 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.48s 2026-01-01 01:07:59.811874 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.42s 2026-01-01 01:07:59.811879 | orchestrator | service-cert-copy : grafana | Copying over extra CA 
certificates -------- 1.38s 2026-01-01 01:07:59.811885 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.32s 2026-01-01 01:07:59.811890 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.21s 2026-01-01 01:07:59.811895 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.95s 2026-01-01 01:07:59.811901 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.90s 2026-01-01 01:07:59.811906 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.86s 2026-01-01 01:07:59.811911 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.85s 2026-01-01 01:07:59.811917 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.79s 2026-01-01 01:07:59.811922 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.69s 2026-01-01 01:07:59.811928 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s 2026-01-01 01:07:59.811933 | orchestrator | 2026-01-01 01:07:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:07:59.811939 | orchestrator | 2026-01-01 01:07:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:22.126803 | orchestrator | 2026-01-01 01:09:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:22.127527 | orchestrator | 2026-01-01 01:09:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:22.127544 | orchestrator | 2026-01-01 01:09:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:25.179546 | orchestrator | 2026-01-01 01:09:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:25.181931 | orchestrator | 2026-01-01 01:09:25 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:25.182269 | orchestrator | 2026-01-01 01:09:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:28.234760 | orchestrator | 2026-01-01 01:09:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:28.235382 | orchestrator | 2026-01-01 01:09:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:28.235417 | orchestrator | 2026-01-01 01:09:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:31.288522 | orchestrator | 2026-01-01 01:09:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:31.288856 | orchestrator | 2026-01-01 01:09:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:31.288889 | orchestrator | 2026-01-01 01:09:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:34.334316 | orchestrator | 2026-01-01 01:09:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:34.335365 | orchestrator | 2026-01-01 01:09:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:34.335420 | orchestrator | 2026-01-01 01:09:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:37.376429 | orchestrator | 2026-01-01 01:09:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:37.378950 | orchestrator | 2026-01-01 01:09:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:37.379008 | orchestrator | 2026-01-01 01:09:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:40.426716 | orchestrator | 2026-01-01 01:09:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:40.427762 | orchestrator | 2026-01-01 01:09:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:09:40.428003 | orchestrator | 2026-01-01 01:09:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:43.480344 | orchestrator | 2026-01-01 01:09:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:43.482417 | orchestrator | 2026-01-01 01:09:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:43.482500 | orchestrator | 2026-01-01 01:09:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:46.532968 | orchestrator | 2026-01-01 01:09:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:46.534889 | orchestrator | 2026-01-01 01:09:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:46.534975 | orchestrator | 2026-01-01 01:09:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:49.576495 | orchestrator | 2026-01-01 01:09:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:49.577927 | orchestrator | 2026-01-01 01:09:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:49.577960 | orchestrator | 2026-01-01 01:09:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:52.626928 | orchestrator | 2026-01-01 01:09:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:52.628590 | orchestrator | 2026-01-01 01:09:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:52.628733 | orchestrator | 2026-01-01 01:09:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:09:55.667881 | orchestrator | 2026-01-01 01:09:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:55.669291 | orchestrator | 2026-01-01 01:09:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:55.669628 | orchestrator | 2026-01-01 01:09:55 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:09:58.721478 | orchestrator | 2026-01-01 01:09:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:09:58.722967 | orchestrator | 2026-01-01 01:09:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:09:58.723153 | orchestrator | 2026-01-01 01:09:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:01.770375 | orchestrator | 2026-01-01 01:10:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:01.772134 | orchestrator | 2026-01-01 01:10:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:01.772299 | orchestrator | 2026-01-01 01:10:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:04.819588 | orchestrator | 2026-01-01 01:10:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:04.821777 | orchestrator | 2026-01-01 01:10:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:04.821877 | orchestrator | 2026-01-01 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:07.877921 | orchestrator | 2026-01-01 01:10:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:07.880771 | orchestrator | 2026-01-01 01:10:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:07.880789 | orchestrator | 2026-01-01 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:10.923717 | orchestrator | 2026-01-01 01:10:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:10.924045 | orchestrator | 2026-01-01 01:10:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:10.924084 | orchestrator | 2026-01-01 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:13.969118 | orchestrator | 2026-01-01 
01:10:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:13.971543 | orchestrator | 2026-01-01 01:10:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:13.971696 | orchestrator | 2026-01-01 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:17.030014 | orchestrator | 2026-01-01 01:10:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:17.032674 | orchestrator | 2026-01-01 01:10:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:17.032728 | orchestrator | 2026-01-01 01:10:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:20.081916 | orchestrator | 2026-01-01 01:10:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:20.082588 | orchestrator | 2026-01-01 01:10:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:20.082622 | orchestrator | 2026-01-01 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:23.132327 | orchestrator | 2026-01-01 01:10:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:23.133260 | orchestrator | 2026-01-01 01:10:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:23.133282 | orchestrator | 2026-01-01 01:10:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:26.184444 | orchestrator | 2026-01-01 01:10:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:26.185954 | orchestrator | 2026-01-01 01:10:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:26.186012 | orchestrator | 2026-01-01 01:10:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:29.235027 | orchestrator | 2026-01-01 01:10:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:10:29.236468 | orchestrator | 2026-01-01 01:10:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:29.236541 | orchestrator | 2026-01-01 01:10:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:32.285201 | orchestrator | 2026-01-01 01:10:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:32.286711 | orchestrator | 2026-01-01 01:10:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:32.286898 | orchestrator | 2026-01-01 01:10:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:35.337480 | orchestrator | 2026-01-01 01:10:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:35.338627 | orchestrator | 2026-01-01 01:10:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:35.338704 | orchestrator | 2026-01-01 01:10:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:38.391747 | orchestrator | 2026-01-01 01:10:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:38.393675 | orchestrator | 2026-01-01 01:10:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:38.393826 | orchestrator | 2026-01-01 01:10:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:41.439367 | orchestrator | 2026-01-01 01:10:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:41.442271 | orchestrator | 2026-01-01 01:10:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:41.442349 | orchestrator | 2026-01-01 01:10:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:44.494460 | orchestrator | 2026-01-01 01:10:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:44.496582 | orchestrator | 2026-01-01 01:10:44 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:44.496626 | orchestrator | 2026-01-01 01:10:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:47.543773 | orchestrator | 2026-01-01 01:10:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:47.545327 | orchestrator | 2026-01-01 01:10:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:47.545397 | orchestrator | 2026-01-01 01:10:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:50.581353 | orchestrator | 2026-01-01 01:10:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:50.583226 | orchestrator | 2026-01-01 01:10:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:50.583303 | orchestrator | 2026-01-01 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:53.631364 | orchestrator | 2026-01-01 01:10:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:53.633417 | orchestrator | 2026-01-01 01:10:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:53.633480 | orchestrator | 2026-01-01 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:56.677355 | orchestrator | 2026-01-01 01:10:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:56.678952 | orchestrator | 2026-01-01 01:10:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:10:56.679003 | orchestrator | 2026-01-01 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:10:59.728350 | orchestrator | 2026-01-01 01:10:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:10:59.730102 | orchestrator | 2026-01-01 01:10:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:10:59.730134 | orchestrator | 2026-01-01 01:10:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:02.776449 | orchestrator | 2026-01-01 01:11:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:02.779231 | orchestrator | 2026-01-01 01:11:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:02.779280 | orchestrator | 2026-01-01 01:11:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:05.829566 | orchestrator | 2026-01-01 01:11:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:05.832409 | orchestrator | 2026-01-01 01:11:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:05.832493 | orchestrator | 2026-01-01 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:08.884769 | orchestrator | 2026-01-01 01:11:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:08.887166 | orchestrator | 2026-01-01 01:11:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:08.887332 | orchestrator | 2026-01-01 01:11:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:11.938345 | orchestrator | 2026-01-01 01:11:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:11.941465 | orchestrator | 2026-01-01 01:11:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:11.941512 | orchestrator | 2026-01-01 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:14.994533 | orchestrator | 2026-01-01 01:11:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:14.996753 | orchestrator | 2026-01-01 01:11:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:14.996885 | orchestrator | 2026-01-01 01:11:14 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:11:18.056580 | orchestrator | 2026-01-01 01:11:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:18.059864 | orchestrator | 2026-01-01 01:11:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:18.059940 | orchestrator | 2026-01-01 01:11:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:21.112330 | orchestrator | 2026-01-01 01:11:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:21.114013 | orchestrator | 2026-01-01 01:11:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:21.114135 | orchestrator | 2026-01-01 01:11:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:24.163196 | orchestrator | 2026-01-01 01:11:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:24.165977 | orchestrator | 2026-01-01 01:11:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:24.166076 | orchestrator | 2026-01-01 01:11:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:27.210277 | orchestrator | 2026-01-01 01:11:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:27.212929 | orchestrator | 2026-01-01 01:11:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:27.212972 | orchestrator | 2026-01-01 01:11:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:30.255271 | orchestrator | 2026-01-01 01:11:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:30.256200 | orchestrator | 2026-01-01 01:11:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:30.256239 | orchestrator | 2026-01-01 01:11:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:33.299770 | orchestrator | 2026-01-01 
01:11:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:33.301979 | orchestrator | 2026-01-01 01:11:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:33.302063 | orchestrator | 2026-01-01 01:11:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:36.355476 | orchestrator | 2026-01-01 01:11:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:36.356701 | orchestrator | 2026-01-01 01:11:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:36.356738 | orchestrator | 2026-01-01 01:11:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:39.406391 | orchestrator | 2026-01-01 01:11:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:39.408402 | orchestrator | 2026-01-01 01:11:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:39.408445 | orchestrator | 2026-01-01 01:11:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:42.453764 | orchestrator | 2026-01-01 01:11:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:42.454984 | orchestrator | 2026-01-01 01:11:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:42.455045 | orchestrator | 2026-01-01 01:11:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:45.506388 | orchestrator | 2026-01-01 01:11:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:45.507999 | orchestrator | 2026-01-01 01:11:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:45.508052 | orchestrator | 2026-01-01 01:11:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:48.555364 | orchestrator | 2026-01-01 01:11:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:11:48.556709 | orchestrator | 2026-01-01 01:11:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:48.556974 | orchestrator | 2026-01-01 01:11:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:51.602453 | orchestrator | 2026-01-01 01:11:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:51.605113 | orchestrator | 2026-01-01 01:11:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:51.606591 | orchestrator | 2026-01-01 01:11:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:54.657731 | orchestrator | 2026-01-01 01:11:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:54.659950 | orchestrator | 2026-01-01 01:11:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:54.660097 | orchestrator | 2026-01-01 01:11:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:11:57.710114 | orchestrator | 2026-01-01 01:11:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:11:57.712519 | orchestrator | 2026-01-01 01:11:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:11:57.712567 | orchestrator | 2026-01-01 01:11:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:00.760032 | orchestrator | 2026-01-01 01:12:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:00.761931 | orchestrator | 2026-01-01 01:12:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:00.761985 | orchestrator | 2026-01-01 01:12:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:03.817972 | orchestrator | 2026-01-01 01:12:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:03.819626 | orchestrator | 2026-01-01 01:12:03 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:03.819736 | orchestrator | 2026-01-01 01:12:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:06.866994 | orchestrator | 2026-01-01 01:12:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:06.869472 | orchestrator | 2026-01-01 01:12:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:06.869549 | orchestrator | 2026-01-01 01:12:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:09.915873 | orchestrator | 2026-01-01 01:12:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:09.917196 | orchestrator | 2026-01-01 01:12:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:09.917234 | orchestrator | 2026-01-01 01:12:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:12.962589 | orchestrator | 2026-01-01 01:12:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:12.966553 | orchestrator | 2026-01-01 01:12:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:12.966975 | orchestrator | 2026-01-01 01:12:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:16.015687 | orchestrator | 2026-01-01 01:12:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:16.016959 | orchestrator | 2026-01-01 01:12:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:16.016984 | orchestrator | 2026-01-01 01:12:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:19.062620 | orchestrator | 2026-01-01 01:12:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:19.063705 | orchestrator | 2026-01-01 01:12:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:12:19.063759 | orchestrator | 2026-01-01 01:12:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:22.107578 | orchestrator | 2026-01-01 01:12:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:22.109193 | orchestrator | 2026-01-01 01:12:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:22.109244 | orchestrator | 2026-01-01 01:12:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:25.158889 | orchestrator | 2026-01-01 01:12:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:25.161291 | orchestrator | 2026-01-01 01:12:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:25.161426 | orchestrator | 2026-01-01 01:12:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:28.215700 | orchestrator | 2026-01-01 01:12:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:28.217980 | orchestrator | 2026-01-01 01:12:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:28.218112 | orchestrator | 2026-01-01 01:12:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:31.271879 | orchestrator | 2026-01-01 01:12:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:31.274084 | orchestrator | 2026-01-01 01:12:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:31.274204 | orchestrator | 2026-01-01 01:12:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:34.333037 | orchestrator | 2026-01-01 01:12:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:34.335322 | orchestrator | 2026-01-01 01:12:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:34.335357 | orchestrator | 2026-01-01 01:12:34 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:12:37.388640 | orchestrator | 2026-01-01 01:12:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:37.390461 | orchestrator | 2026-01-01 01:12:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:37.390550 | orchestrator | 2026-01-01 01:12:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:40.445697 | orchestrator | 2026-01-01 01:12:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:40.446857 | orchestrator | 2026-01-01 01:12:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:40.446902 | orchestrator | 2026-01-01 01:12:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:43.493899 | orchestrator | 2026-01-01 01:12:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:43.494252 | orchestrator | 2026-01-01 01:12:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:43.494283 | orchestrator | 2026-01-01 01:12:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:46.542924 | orchestrator | 2026-01-01 01:12:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:46.548317 | orchestrator | 2026-01-01 01:12:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:46.548385 | orchestrator | 2026-01-01 01:12:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:49.604418 | orchestrator | 2026-01-01 01:12:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:49.606666 | orchestrator | 2026-01-01 01:12:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:49.606930 | orchestrator | 2026-01-01 01:12:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:52.657210 | orchestrator | 2026-01-01 
01:12:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:52.660762 | orchestrator | 2026-01-01 01:12:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:52.660813 | orchestrator | 2026-01-01 01:12:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:55.708981 | orchestrator | 2026-01-01 01:12:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:55.709567 | orchestrator | 2026-01-01 01:12:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:55.709606 | orchestrator | 2026-01-01 01:12:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:12:58.755255 | orchestrator | 2026-01-01 01:12:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:12:58.758878 | orchestrator | 2026-01-01 01:12:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:12:58.758926 | orchestrator | 2026-01-01 01:12:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:01.802669 | orchestrator | 2026-01-01 01:13:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:13:01.805576 | orchestrator | 2026-01-01 01:13:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:13:01.805620 | orchestrator | 2026-01-01 01:13:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:04.860029 | orchestrator | 2026-01-01 01:13:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:13:04.862940 | orchestrator | 2026-01-01 01:13:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:13:04.862976 | orchestrator | 2026-01-01 01:13:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:13:07.908932 | orchestrator | 2026-01-01 01:13:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:13:07.910818 | orchestrator | 2026-01-01 01:13:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:13:07.910983 | orchestrator | 2026-01-01 01:13:07 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: from 01:13:10 to 01:18:37, tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 are both reported "in state STARTED" every ~3 seconds, each cycle followed by "Wait 1 second(s) until the next check" ...]
2026-01-01 01:18:40.573516 | orchestrator | 2026-01-01 01:18:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 01:18:40.575771 | orchestrator | 2026-01-01 01:18:40 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:18:40.575845 | orchestrator | 2026-01-01 01:18:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:18:43.624963 | orchestrator | 2026-01-01 01:18:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:18:43.626204 | orchestrator | 2026-01-01 01:18:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:18:43.626292 | orchestrator | 2026-01-01 01:18:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:18:46.675012 | orchestrator | 2026-01-01 01:18:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:18:46.676245 | orchestrator | 2026-01-01 01:18:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:18:46.676438 | orchestrator | 2026-01-01 01:18:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:18:49.724473 | orchestrator | 2026-01-01 01:18:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:18:49.725742 | orchestrator | 2026-01-01 01:18:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:18:49.725776 | orchestrator | 2026-01-01 01:18:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:18:52.767590 | orchestrator | 2026-01-01 01:18:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:18:52.767772 | orchestrator | 2026-01-01 01:18:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:18:52.767791 | orchestrator | 2026-01-01 01:18:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:18:55.817349 | orchestrator | 2026-01-01 01:18:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:18:55.818342 | orchestrator | 2026-01-01 01:18:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:18:55.818420 | orchestrator | 2026-01-01 01:18:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:18:58.868492 | orchestrator | 2026-01-01 01:18:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:18:58.869831 | orchestrator | 2026-01-01 01:18:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:18:58.869866 | orchestrator | 2026-01-01 01:18:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:01.918273 | orchestrator | 2026-01-01 01:19:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:01.920112 | orchestrator | 2026-01-01 01:19:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:01.920156 | orchestrator | 2026-01-01 01:19:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:04.969090 | orchestrator | 2026-01-01 01:19:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:04.971283 | orchestrator | 2026-01-01 01:19:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:04.971321 | orchestrator | 2026-01-01 01:19:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:08.018256 | orchestrator | 2026-01-01 01:19:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:08.019888 | orchestrator | 2026-01-01 01:19:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:08.019952 | orchestrator | 2026-01-01 01:19:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:11.072004 | orchestrator | 2026-01-01 01:19:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:11.073713 | orchestrator | 2026-01-01 01:19:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:11.073755 | orchestrator | 2026-01-01 01:19:11 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:19:14.119006 | orchestrator | 2026-01-01 01:19:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:14.121615 | orchestrator | 2026-01-01 01:19:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:14.121738 | orchestrator | 2026-01-01 01:19:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:17.167018 | orchestrator | 2026-01-01 01:19:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:17.169237 | orchestrator | 2026-01-01 01:19:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:17.169317 | orchestrator | 2026-01-01 01:19:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:20.218296 | orchestrator | 2026-01-01 01:19:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:20.220151 | orchestrator | 2026-01-01 01:19:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:20.220225 | orchestrator | 2026-01-01 01:19:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:23.271140 | orchestrator | 2026-01-01 01:19:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:23.272426 | orchestrator | 2026-01-01 01:19:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:23.272479 | orchestrator | 2026-01-01 01:19:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:26.315367 | orchestrator | 2026-01-01 01:19:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:26.318881 | orchestrator | 2026-01-01 01:19:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:26.318967 | orchestrator | 2026-01-01 01:19:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:29.359733 | orchestrator | 2026-01-01 
01:19:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:29.361295 | orchestrator | 2026-01-01 01:19:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:29.361353 | orchestrator | 2026-01-01 01:19:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:32.404564 | orchestrator | 2026-01-01 01:19:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:32.407140 | orchestrator | 2026-01-01 01:19:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:32.407201 | orchestrator | 2026-01-01 01:19:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:35.460235 | orchestrator | 2026-01-01 01:19:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:35.462181 | orchestrator | 2026-01-01 01:19:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:35.462235 | orchestrator | 2026-01-01 01:19:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:38.517246 | orchestrator | 2026-01-01 01:19:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:38.519124 | orchestrator | 2026-01-01 01:19:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:38.519433 | orchestrator | 2026-01-01 01:19:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:41.570996 | orchestrator | 2026-01-01 01:19:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:41.573518 | orchestrator | 2026-01-01 01:19:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:41.573616 | orchestrator | 2026-01-01 01:19:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:44.619271 | orchestrator | 2026-01-01 01:19:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:19:44.620878 | orchestrator | 2026-01-01 01:19:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:44.622118 | orchestrator | 2026-01-01 01:19:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:47.669714 | orchestrator | 2026-01-01 01:19:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:47.671222 | orchestrator | 2026-01-01 01:19:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:47.671507 | orchestrator | 2026-01-01 01:19:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:50.720851 | orchestrator | 2026-01-01 01:19:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:50.723806 | orchestrator | 2026-01-01 01:19:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:50.723855 | orchestrator | 2026-01-01 01:19:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:53.772787 | orchestrator | 2026-01-01 01:19:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:53.773992 | orchestrator | 2026-01-01 01:19:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:53.774072 | orchestrator | 2026-01-01 01:19:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:56.814756 | orchestrator | 2026-01-01 01:19:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:56.816222 | orchestrator | 2026-01-01 01:19:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:56.816295 | orchestrator | 2026-01-01 01:19:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:19:59.860034 | orchestrator | 2026-01-01 01:19:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:19:59.860631 | orchestrator | 2026-01-01 01:19:59 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:19:59.860876 | orchestrator | 2026-01-01 01:19:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:02.902236 | orchestrator | 2026-01-01 01:20:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:02.903578 | orchestrator | 2026-01-01 01:20:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:02.903653 | orchestrator | 2026-01-01 01:20:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:05.954178 | orchestrator | 2026-01-01 01:20:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:05.956898 | orchestrator | 2026-01-01 01:20:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:05.957627 | orchestrator | 2026-01-01 01:20:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:09.007325 | orchestrator | 2026-01-01 01:20:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:09.008468 | orchestrator | 2026-01-01 01:20:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:09.008540 | orchestrator | 2026-01-01 01:20:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:12.056279 | orchestrator | 2026-01-01 01:20:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:12.057085 | orchestrator | 2026-01-01 01:20:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:12.057118 | orchestrator | 2026-01-01 01:20:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:15.110763 | orchestrator | 2026-01-01 01:20:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:15.113065 | orchestrator | 2026-01-01 01:20:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:20:15.113205 | orchestrator | 2026-01-01 01:20:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:18.162102 | orchestrator | 2026-01-01 01:20:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:18.164846 | orchestrator | 2026-01-01 01:20:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:18.164892 | orchestrator | 2026-01-01 01:20:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:21.224981 | orchestrator | 2026-01-01 01:20:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:21.226795 | orchestrator | 2026-01-01 01:20:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:21.226834 | orchestrator | 2026-01-01 01:20:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:24.275798 | orchestrator | 2026-01-01 01:20:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:24.276450 | orchestrator | 2026-01-01 01:20:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:24.276677 | orchestrator | 2026-01-01 01:20:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:27.323340 | orchestrator | 2026-01-01 01:20:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:27.325389 | orchestrator | 2026-01-01 01:20:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:27.325523 | orchestrator | 2026-01-01 01:20:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:30.379481 | orchestrator | 2026-01-01 01:20:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:30.382355 | orchestrator | 2026-01-01 01:20:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:30.383721 | orchestrator | 2026-01-01 01:20:30 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:20:33.422343 | orchestrator | 2026-01-01 01:20:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:33.423912 | orchestrator | 2026-01-01 01:20:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:33.423967 | orchestrator | 2026-01-01 01:20:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:36.468388 | orchestrator | 2026-01-01 01:20:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:36.470211 | orchestrator | 2026-01-01 01:20:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:36.470303 | orchestrator | 2026-01-01 01:20:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:39.517112 | orchestrator | 2026-01-01 01:20:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:39.519288 | orchestrator | 2026-01-01 01:20:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:39.519360 | orchestrator | 2026-01-01 01:20:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:42.565515 | orchestrator | 2026-01-01 01:20:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:42.568828 | orchestrator | 2026-01-01 01:20:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:42.568864 | orchestrator | 2026-01-01 01:20:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:45.614864 | orchestrator | 2026-01-01 01:20:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:45.616107 | orchestrator | 2026-01-01 01:20:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:45.616328 | orchestrator | 2026-01-01 01:20:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:48.661817 | orchestrator | 2026-01-01 
01:20:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:48.663110 | orchestrator | 2026-01-01 01:20:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:48.663394 | orchestrator | 2026-01-01 01:20:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:51.718096 | orchestrator | 2026-01-01 01:20:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:51.719748 | orchestrator | 2026-01-01 01:20:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:51.719853 | orchestrator | 2026-01-01 01:20:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:54.766824 | orchestrator | 2026-01-01 01:20:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:54.769805 | orchestrator | 2026-01-01 01:20:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:54.769868 | orchestrator | 2026-01-01 01:20:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:20:57.820715 | orchestrator | 2026-01-01 01:20:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:20:57.822838 | orchestrator | 2026-01-01 01:20:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:20:57.822874 | orchestrator | 2026-01-01 01:20:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:00.871798 | orchestrator | 2026-01-01 01:21:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:00.873744 | orchestrator | 2026-01-01 01:21:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:00.873767 | orchestrator | 2026-01-01 01:21:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:03.914505 | orchestrator | 2026-01-01 01:21:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:21:03.917793 | orchestrator | 2026-01-01 01:21:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:03.917831 | orchestrator | 2026-01-01 01:21:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:06.966778 | orchestrator | 2026-01-01 01:21:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:06.968104 | orchestrator | 2026-01-01 01:21:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:06.968139 | orchestrator | 2026-01-01 01:21:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:10.011785 | orchestrator | 2026-01-01 01:21:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:10.015740 | orchestrator | 2026-01-01 01:21:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:10.015808 | orchestrator | 2026-01-01 01:21:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:13.056124 | orchestrator | 2026-01-01 01:21:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:13.057074 | orchestrator | 2026-01-01 01:21:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:13.057091 | orchestrator | 2026-01-01 01:21:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:16.104741 | orchestrator | 2026-01-01 01:21:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:16.106991 | orchestrator | 2026-01-01 01:21:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:16.107007 | orchestrator | 2026-01-01 01:21:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:19.157750 | orchestrator | 2026-01-01 01:21:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:19.159392 | orchestrator | 2026-01-01 01:21:19 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:19.159410 | orchestrator | 2026-01-01 01:21:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:22.211226 | orchestrator | 2026-01-01 01:21:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:22.213360 | orchestrator | 2026-01-01 01:21:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:22.213374 | orchestrator | 2026-01-01 01:21:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:25.262469 | orchestrator | 2026-01-01 01:21:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:25.265912 | orchestrator | 2026-01-01 01:21:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:25.266248 | orchestrator | 2026-01-01 01:21:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:28.311430 | orchestrator | 2026-01-01 01:21:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:28.313235 | orchestrator | 2026-01-01 01:21:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:28.313344 | orchestrator | 2026-01-01 01:21:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:31.359801 | orchestrator | 2026-01-01 01:21:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:31.360515 | orchestrator | 2026-01-01 01:21:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:31.360612 | orchestrator | 2026-01-01 01:21:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:34.412718 | orchestrator | 2026-01-01 01:21:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:34.415345 | orchestrator | 2026-01-01 01:21:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:21:34.415396 | orchestrator | 2026-01-01 01:21:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:37.462995 | orchestrator | 2026-01-01 01:21:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:37.464878 | orchestrator | 2026-01-01 01:21:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:37.464918 | orchestrator | 2026-01-01 01:21:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:40.522600 | orchestrator | 2026-01-01 01:21:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:40.525203 | orchestrator | 2026-01-01 01:21:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:40.525256 | orchestrator | 2026-01-01 01:21:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:43.578279 | orchestrator | 2026-01-01 01:21:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:43.579343 | orchestrator | 2026-01-01 01:21:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:43.579418 | orchestrator | 2026-01-01 01:21:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:46.623175 | orchestrator | 2026-01-01 01:21:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:46.624201 | orchestrator | 2026-01-01 01:21:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:46.624257 | orchestrator | 2026-01-01 01:21:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:49.668756 | orchestrator | 2026-01-01 01:21:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:49.671195 | orchestrator | 2026-01-01 01:21:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:49.671268 | orchestrator | 2026-01-01 01:21:49 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:21:52.726957 | orchestrator | 2026-01-01 01:21:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:52.727918 | orchestrator | 2026-01-01 01:21:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:52.728001 | orchestrator | 2026-01-01 01:21:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:55.774354 | orchestrator | 2026-01-01 01:21:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:55.775818 | orchestrator | 2026-01-01 01:21:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:55.775843 | orchestrator | 2026-01-01 01:21:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:21:58.827918 | orchestrator | 2026-01-01 01:21:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:21:58.829309 | orchestrator | 2026-01-01 01:21:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:21:58.829380 | orchestrator | 2026-01-01 01:21:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:01.882228 | orchestrator | 2026-01-01 01:22:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:22:01.883560 | orchestrator | 2026-01-01 01:22:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:22:01.883592 | orchestrator | 2026-01-01 01:22:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:04.928537 | orchestrator | 2026-01-01 01:22:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:22:04.930272 | orchestrator | 2026-01-01 01:22:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:22:04.930307 | orchestrator | 2026-01-01 01:22:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:07.978768 | orchestrator | 2026-01-01 
01:22:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:22:07.980328 | orchestrator | 2026-01-01 01:22:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:22:07.980365 | orchestrator | 2026-01-01 01:22:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:11.031428 | orchestrator | 2026-01-01 01:22:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:22:11.033417 | orchestrator | 2026-01-01 01:22:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:22:11.033501 | orchestrator | 2026-01-01 01:22:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:14.079795 | orchestrator | 2026-01-01 01:22:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:22:14.082076 | orchestrator | 2026-01-01 01:22:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:22:14.082135 | orchestrator | 2026-01-01 01:22:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:17.121746 | orchestrator | 2026-01-01 01:22:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:22:17.122905 | orchestrator | 2026-01-01 01:22:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:22:17.122978 | orchestrator | 2026-01-01 01:22:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:20.178778 | orchestrator | 2026-01-01 01:22:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:22:20.180468 | orchestrator | 2026-01-01 01:22:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:22:20.180498 | orchestrator | 2026-01-01 01:22:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:22:23.230314 | orchestrator | 2026-01-01 01:22:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:22:23.231507 | orchestrator | 2026-01-01 01:22:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:22:23.231545 | orchestrator | 2026-01-01 01:22:23 | INFO  | Wait 1 second(s) until the next check
[... repeated polling output elided: tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 remained in state STARTED, polled every ~3 seconds from 01:22:26 through 01:27:37 ...]
2026-01-01 01:27:40.644585 | orchestrator | 2026-01-01 01:27:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state
STARTED 2026-01-01 01:27:40.646290 | orchestrator | 2026-01-01 01:27:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:27:40.646483 | orchestrator | 2026-01-01 01:27:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:27:43.689192 | orchestrator | 2026-01-01 01:27:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:27:43.690324 | orchestrator | 2026-01-01 01:27:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:27:43.690357 | orchestrator | 2026-01-01 01:27:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:27:46.743510 | orchestrator | 2026-01-01 01:27:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:27:46.745514 | orchestrator | 2026-01-01 01:27:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:27:46.745537 | orchestrator | 2026-01-01 01:27:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:27:49.796820 | orchestrator | 2026-01-01 01:27:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:27:49.798391 | orchestrator | 2026-01-01 01:27:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:27:49.798432 | orchestrator | 2026-01-01 01:27:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:27:52.845221 | orchestrator | 2026-01-01 01:27:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:27:52.846249 | orchestrator | 2026-01-01 01:27:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:27:52.846363 | orchestrator | 2026-01-01 01:27:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:27:55.897969 | orchestrator | 2026-01-01 01:27:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:27:55.900026 | orchestrator | 2026-01-01 01:27:55 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:27:55.900040 | orchestrator | 2026-01-01 01:27:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:27:58.946651 | orchestrator | 2026-01-01 01:27:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:27:58.948350 | orchestrator | 2026-01-01 01:27:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:27:58.948408 | orchestrator | 2026-01-01 01:27:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:01.992936 | orchestrator | 2026-01-01 01:28:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:01.994800 | orchestrator | 2026-01-01 01:28:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:01.994908 | orchestrator | 2026-01-01 01:28:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:05.039144 | orchestrator | 2026-01-01 01:28:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:05.040907 | orchestrator | 2026-01-01 01:28:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:05.040959 | orchestrator | 2026-01-01 01:28:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:08.088155 | orchestrator | 2026-01-01 01:28:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:08.089386 | orchestrator | 2026-01-01 01:28:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:08.089416 | orchestrator | 2026-01-01 01:28:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:11.139509 | orchestrator | 2026-01-01 01:28:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:11.141859 | orchestrator | 2026-01-01 01:28:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:28:11.141965 | orchestrator | 2026-01-01 01:28:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:14.193280 | orchestrator | 2026-01-01 01:28:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:14.199608 | orchestrator | 2026-01-01 01:28:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:14.199706 | orchestrator | 2026-01-01 01:28:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:17.244850 | orchestrator | 2026-01-01 01:28:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:17.246661 | orchestrator | 2026-01-01 01:28:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:17.246683 | orchestrator | 2026-01-01 01:28:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:20.300157 | orchestrator | 2026-01-01 01:28:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:20.302681 | orchestrator | 2026-01-01 01:28:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:20.302720 | orchestrator | 2026-01-01 01:28:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:23.352428 | orchestrator | 2026-01-01 01:28:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:23.355425 | orchestrator | 2026-01-01 01:28:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:23.355521 | orchestrator | 2026-01-01 01:28:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:26.401107 | orchestrator | 2026-01-01 01:28:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:26.403834 | orchestrator | 2026-01-01 01:28:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:26.404235 | orchestrator | 2026-01-01 01:28:26 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:28:29.442113 | orchestrator | 2026-01-01 01:28:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:29.442971 | orchestrator | 2026-01-01 01:28:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:29.443399 | orchestrator | 2026-01-01 01:28:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:32.500179 | orchestrator | 2026-01-01 01:28:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:32.502567 | orchestrator | 2026-01-01 01:28:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:32.502668 | orchestrator | 2026-01-01 01:28:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:35.554729 | orchestrator | 2026-01-01 01:28:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:35.557450 | orchestrator | 2026-01-01 01:28:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:35.557501 | orchestrator | 2026-01-01 01:28:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:38.612522 | orchestrator | 2026-01-01 01:28:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:38.613841 | orchestrator | 2026-01-01 01:28:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:38.613959 | orchestrator | 2026-01-01 01:28:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:41.662803 | orchestrator | 2026-01-01 01:28:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:41.664205 | orchestrator | 2026-01-01 01:28:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:41.664247 | orchestrator | 2026-01-01 01:28:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:44.712926 | orchestrator | 2026-01-01 
01:28:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:44.715166 | orchestrator | 2026-01-01 01:28:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:44.715459 | orchestrator | 2026-01-01 01:28:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:47.763979 | orchestrator | 2026-01-01 01:28:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:47.766442 | orchestrator | 2026-01-01 01:28:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:47.766505 | orchestrator | 2026-01-01 01:28:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:50.817691 | orchestrator | 2026-01-01 01:28:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:50.818879 | orchestrator | 2026-01-01 01:28:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:50.818971 | orchestrator | 2026-01-01 01:28:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:53.878388 | orchestrator | 2026-01-01 01:28:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:53.878638 | orchestrator | 2026-01-01 01:28:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:53.878664 | orchestrator | 2026-01-01 01:28:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:56.925701 | orchestrator | 2026-01-01 01:28:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:28:56.927806 | orchestrator | 2026-01-01 01:28:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:56.928150 | orchestrator | 2026-01-01 01:28:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:28:59.970437 | orchestrator | 2026-01-01 01:28:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:28:59.973142 | orchestrator | 2026-01-01 01:28:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:28:59.973181 | orchestrator | 2026-01-01 01:28:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:03.022799 | orchestrator | 2026-01-01 01:29:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:03.025475 | orchestrator | 2026-01-01 01:29:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:03.025574 | orchestrator | 2026-01-01 01:29:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:06.071241 | orchestrator | 2026-01-01 01:29:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:06.075565 | orchestrator | 2026-01-01 01:29:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:06.075648 | orchestrator | 2026-01-01 01:29:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:09.126211 | orchestrator | 2026-01-01 01:29:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:09.127279 | orchestrator | 2026-01-01 01:29:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:09.127312 | orchestrator | 2026-01-01 01:29:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:12.173308 | orchestrator | 2026-01-01 01:29:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:12.175250 | orchestrator | 2026-01-01 01:29:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:12.175555 | orchestrator | 2026-01-01 01:29:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:15.220737 | orchestrator | 2026-01-01 01:29:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:15.221674 | orchestrator | 2026-01-01 01:29:15 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:15.221741 | orchestrator | 2026-01-01 01:29:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:18.271869 | orchestrator | 2026-01-01 01:29:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:18.274163 | orchestrator | 2026-01-01 01:29:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:18.274212 | orchestrator | 2026-01-01 01:29:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:21.322782 | orchestrator | 2026-01-01 01:29:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:21.324314 | orchestrator | 2026-01-01 01:29:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:21.324379 | orchestrator | 2026-01-01 01:29:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:24.381316 | orchestrator | 2026-01-01 01:29:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:24.382820 | orchestrator | 2026-01-01 01:29:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:24.382996 | orchestrator | 2026-01-01 01:29:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:27.438658 | orchestrator | 2026-01-01 01:29:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:27.440585 | orchestrator | 2026-01-01 01:29:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:27.440998 | orchestrator | 2026-01-01 01:29:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:30.494194 | orchestrator | 2026-01-01 01:29:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:30.495971 | orchestrator | 2026-01-01 01:29:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:29:30.496026 | orchestrator | 2026-01-01 01:29:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:33.545324 | orchestrator | 2026-01-01 01:29:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:33.547292 | orchestrator | 2026-01-01 01:29:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:33.547315 | orchestrator | 2026-01-01 01:29:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:36.605738 | orchestrator | 2026-01-01 01:29:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:36.608847 | orchestrator | 2026-01-01 01:29:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:36.608903 | orchestrator | 2026-01-01 01:29:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:39.664476 | orchestrator | 2026-01-01 01:29:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:39.666508 | orchestrator | 2026-01-01 01:29:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:39.666592 | orchestrator | 2026-01-01 01:29:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:42.710378 | orchestrator | 2026-01-01 01:29:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:42.712309 | orchestrator | 2026-01-01 01:29:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:42.712361 | orchestrator | 2026-01-01 01:29:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:45.770866 | orchestrator | 2026-01-01 01:29:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:45.773239 | orchestrator | 2026-01-01 01:29:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:45.773554 | orchestrator | 2026-01-01 01:29:45 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:29:48.822632 | orchestrator | 2026-01-01 01:29:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:48.824679 | orchestrator | 2026-01-01 01:29:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:48.824721 | orchestrator | 2026-01-01 01:29:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:51.874196 | orchestrator | 2026-01-01 01:29:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:51.875828 | orchestrator | 2026-01-01 01:29:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:51.875868 | orchestrator | 2026-01-01 01:29:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:54.925829 | orchestrator | 2026-01-01 01:29:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:54.927709 | orchestrator | 2026-01-01 01:29:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:54.927746 | orchestrator | 2026-01-01 01:29:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:29:57.978913 | orchestrator | 2026-01-01 01:29:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:29:57.980953 | orchestrator | 2026-01-01 01:29:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:29:57.980990 | orchestrator | 2026-01-01 01:29:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:01.039182 | orchestrator | 2026-01-01 01:30:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:01.039294 | orchestrator | 2026-01-01 01:30:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:01.039308 | orchestrator | 2026-01-01 01:30:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:04.090428 | orchestrator | 2026-01-01 
01:30:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:04.092565 | orchestrator | 2026-01-01 01:30:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:04.092642 | orchestrator | 2026-01-01 01:30:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:07.138621 | orchestrator | 2026-01-01 01:30:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:07.140915 | orchestrator | 2026-01-01 01:30:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:07.141064 | orchestrator | 2026-01-01 01:30:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:10.193332 | orchestrator | 2026-01-01 01:30:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:10.194536 | orchestrator | 2026-01-01 01:30:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:10.194574 | orchestrator | 2026-01-01 01:30:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:13.237661 | orchestrator | 2026-01-01 01:30:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:13.238493 | orchestrator | 2026-01-01 01:30:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:13.238549 | orchestrator | 2026-01-01 01:30:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:16.288122 | orchestrator | 2026-01-01 01:30:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:16.290146 | orchestrator | 2026-01-01 01:30:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:16.290233 | orchestrator | 2026-01-01 01:30:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:19.341552 | orchestrator | 2026-01-01 01:30:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:30:19.343511 | orchestrator | 2026-01-01 01:30:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:19.343558 | orchestrator | 2026-01-01 01:30:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:22.394786 | orchestrator | 2026-01-01 01:30:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:22.395922 | orchestrator | 2026-01-01 01:30:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:22.396245 | orchestrator | 2026-01-01 01:30:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:25.452405 | orchestrator | 2026-01-01 01:30:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:25.454651 | orchestrator | 2026-01-01 01:30:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:25.454690 | orchestrator | 2026-01-01 01:30:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:28.502826 | orchestrator | 2026-01-01 01:30:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:28.504282 | orchestrator | 2026-01-01 01:30:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:28.504313 | orchestrator | 2026-01-01 01:30:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:31.545363 | orchestrator | 2026-01-01 01:30:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:31.546591 | orchestrator | 2026-01-01 01:30:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:31.546620 | orchestrator | 2026-01-01 01:30:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:34.595810 | orchestrator | 2026-01-01 01:30:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:34.596413 | orchestrator | 2026-01-01 01:30:34 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:34.597012 | orchestrator | 2026-01-01 01:30:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:37.644162 | orchestrator | 2026-01-01 01:30:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:37.646470 | orchestrator | 2026-01-01 01:30:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:37.646505 | orchestrator | 2026-01-01 01:30:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:40.699405 | orchestrator | 2026-01-01 01:30:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:40.701022 | orchestrator | 2026-01-01 01:30:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:40.701107 | orchestrator | 2026-01-01 01:30:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:43.751397 | orchestrator | 2026-01-01 01:30:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:43.753758 | orchestrator | 2026-01-01 01:30:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:43.753813 | orchestrator | 2026-01-01 01:30:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:46.807269 | orchestrator | 2026-01-01 01:30:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:46.808784 | orchestrator | 2026-01-01 01:30:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:46.808848 | orchestrator | 2026-01-01 01:30:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:49.859295 | orchestrator | 2026-01-01 01:30:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:49.860413 | orchestrator | 2026-01-01 01:30:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:30:49.860450 | orchestrator | 2026-01-01 01:30:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:52.909925 | orchestrator | 2026-01-01 01:30:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:52.911719 | orchestrator | 2026-01-01 01:30:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:52.911751 | orchestrator | 2026-01-01 01:30:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:55.954662 | orchestrator | 2026-01-01 01:30:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:55.955935 | orchestrator | 2026-01-01 01:30:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:55.955985 | orchestrator | 2026-01-01 01:30:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:30:59.007773 | orchestrator | 2026-01-01 01:30:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:30:59.010309 | orchestrator | 2026-01-01 01:30:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:30:59.010381 | orchestrator | 2026-01-01 01:30:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:02.055725 | orchestrator | 2026-01-01 01:31:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:31:02.055887 | orchestrator | 2026-01-01 01:31:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:31:02.055905 | orchestrator | 2026-01-01 01:31:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:31:05.109588 | orchestrator | 2026-01-01 01:31:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:31:05.112103 | orchestrator | 2026-01-01 01:31:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:31:05.112143 | orchestrator | 2026-01-01 01:31:05 | INFO  | Wait 1 second(s) 
until the next check
2026-01-01 01:31:08.155081 | orchestrator | 2026-01-01 01:31:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 01:31:08.157241 | orchestrator | 2026-01-01 01:31:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:31:08.157387 | orchestrator | 2026-01-01 01:31:08 | INFO  | Wait 1 second(s) until the next check
[... identical status-check cycles repeated every ~3 seconds from 01:31:11 through 01:36:19 elided; both tasks remained in state STARTED throughout ...]
2026-01-01 01:36:22.484589 | orchestrator | 2026-01-01 01:36:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 01:36:22.485947 | orchestrator | 2026-01-01 01:36:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:36:22.486144 | orchestrator | 2026-01-01 01:36:22 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 01:36:25.533561 | orchestrator | 2026-01-01 01:36:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:36:25.534837 | orchestrator | 2026-01-01 01:36:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:25.534879 | orchestrator | 2026-01-01 01:36:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:36:28.588316 | orchestrator | 2026-01-01 01:36:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:36:28.590277 | orchestrator | 2026-01-01 01:36:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:28.590367 | orchestrator | 2026-01-01 01:36:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:36:31.643555 | orchestrator | 2026-01-01 01:36:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:36:31.645978 | orchestrator | 2026-01-01 01:36:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:31.646133 | orchestrator | 2026-01-01 01:36:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:36:34.699464 | orchestrator | 2026-01-01 01:36:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:36:34.700866 | orchestrator | 2026-01-01 01:36:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:34.701143 | orchestrator | 2026-01-01 01:36:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:36:37.745021 | orchestrator | 2026-01-01 01:36:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:36:37.747107 | orchestrator | 2026-01-01 01:36:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:37.747146 | orchestrator | 2026-01-01 01:36:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:36:40.793779 | orchestrator | 2026-01-01 
01:36:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:36:40.795215 | orchestrator | 2026-01-01 01:36:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:40.795295 | orchestrator | 2026-01-01 01:36:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:36:43.843469 | orchestrator | 2026-01-01 01:36:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:36:43.844542 | orchestrator | 2026-01-01 01:36:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:43.844578 | orchestrator | 2026-01-01 01:36:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:36:46.894308 | orchestrator | 2026-01-01 01:36:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:36:46.895389 | orchestrator | 2026-01-01 01:36:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:46.895447 | orchestrator | 2026-01-01 01:36:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:36:49.956189 | orchestrator | 2026-01-01 01:36:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:36:49.958711 | orchestrator | 2026-01-01 01:36:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:49.958755 | orchestrator | 2026-01-01 01:36:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:36:53.010117 | orchestrator | 2026-01-01 01:36:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:36:53.011538 | orchestrator | 2026-01-01 01:36:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:53.011574 | orchestrator | 2026-01-01 01:36:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:36:56.063185 | orchestrator | 2026-01-01 01:36:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:36:56.065763 | orchestrator | 2026-01-01 01:36:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:56.066679 | orchestrator | 2026-01-01 01:36:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:36:59.110709 | orchestrator | 2026-01-01 01:36:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:36:59.112949 | orchestrator | 2026-01-01 01:36:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:36:59.113002 | orchestrator | 2026-01-01 01:36:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:02.148313 | orchestrator | 2026-01-01 01:37:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:02.149587 | orchestrator | 2026-01-01 01:37:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:02.149618 | orchestrator | 2026-01-01 01:37:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:05.202286 | orchestrator | 2026-01-01 01:37:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:05.203998 | orchestrator | 2026-01-01 01:37:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:05.204429 | orchestrator | 2026-01-01 01:37:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:08.252672 | orchestrator | 2026-01-01 01:37:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:08.256781 | orchestrator | 2026-01-01 01:37:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:08.256822 | orchestrator | 2026-01-01 01:37:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:11.299684 | orchestrator | 2026-01-01 01:37:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:11.302382 | orchestrator | 2026-01-01 01:37:11 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:11.302420 | orchestrator | 2026-01-01 01:37:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:14.349315 | orchestrator | 2026-01-01 01:37:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:14.349433 | orchestrator | 2026-01-01 01:37:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:14.349456 | orchestrator | 2026-01-01 01:37:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:17.397922 | orchestrator | 2026-01-01 01:37:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:17.399319 | orchestrator | 2026-01-01 01:37:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:17.399356 | orchestrator | 2026-01-01 01:37:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:20.450678 | orchestrator | 2026-01-01 01:37:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:20.453149 | orchestrator | 2026-01-01 01:37:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:20.453190 | orchestrator | 2026-01-01 01:37:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:23.498646 | orchestrator | 2026-01-01 01:37:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:23.500242 | orchestrator | 2026-01-01 01:37:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:23.500391 | orchestrator | 2026-01-01 01:37:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:26.545982 | orchestrator | 2026-01-01 01:37:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:26.549022 | orchestrator | 2026-01-01 01:37:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:37:26.549099 | orchestrator | 2026-01-01 01:37:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:29.600238 | orchestrator | 2026-01-01 01:37:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:29.601929 | orchestrator | 2026-01-01 01:37:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:29.602266 | orchestrator | 2026-01-01 01:37:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:32.652455 | orchestrator | 2026-01-01 01:37:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:32.653163 | orchestrator | 2026-01-01 01:37:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:32.653326 | orchestrator | 2026-01-01 01:37:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:35.703194 | orchestrator | 2026-01-01 01:37:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:35.707927 | orchestrator | 2026-01-01 01:37:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:35.707968 | orchestrator | 2026-01-01 01:37:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:38.757500 | orchestrator | 2026-01-01 01:37:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:38.758698 | orchestrator | 2026-01-01 01:37:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:38.758731 | orchestrator | 2026-01-01 01:37:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:41.805007 | orchestrator | 2026-01-01 01:37:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:41.807497 | orchestrator | 2026-01-01 01:37:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:41.807544 | orchestrator | 2026-01-01 01:37:41 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:37:44.859960 | orchestrator | 2026-01-01 01:37:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:44.861872 | orchestrator | 2026-01-01 01:37:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:44.861911 | orchestrator | 2026-01-01 01:37:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:47.910725 | orchestrator | 2026-01-01 01:37:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:47.912379 | orchestrator | 2026-01-01 01:37:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:47.912429 | orchestrator | 2026-01-01 01:37:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:50.965887 | orchestrator | 2026-01-01 01:37:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:50.967763 | orchestrator | 2026-01-01 01:37:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:50.967815 | orchestrator | 2026-01-01 01:37:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:54.023274 | orchestrator | 2026-01-01 01:37:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:54.026298 | orchestrator | 2026-01-01 01:37:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:54.026711 | orchestrator | 2026-01-01 01:37:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:37:57.074128 | orchestrator | 2026-01-01 01:37:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:37:57.075386 | orchestrator | 2026-01-01 01:37:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:37:57.075421 | orchestrator | 2026-01-01 01:37:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:00.123721 | orchestrator | 2026-01-01 
01:38:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:00.125249 | orchestrator | 2026-01-01 01:38:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:00.125275 | orchestrator | 2026-01-01 01:38:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:03.173211 | orchestrator | 2026-01-01 01:38:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:03.176078 | orchestrator | 2026-01-01 01:38:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:03.176162 | orchestrator | 2026-01-01 01:38:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:06.229189 | orchestrator | 2026-01-01 01:38:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:06.230737 | orchestrator | 2026-01-01 01:38:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:06.230877 | orchestrator | 2026-01-01 01:38:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:09.283722 | orchestrator | 2026-01-01 01:38:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:09.286357 | orchestrator | 2026-01-01 01:38:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:09.286430 | orchestrator | 2026-01-01 01:38:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:12.333386 | orchestrator | 2026-01-01 01:38:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:12.335605 | orchestrator | 2026-01-01 01:38:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:12.335682 | orchestrator | 2026-01-01 01:38:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:15.378233 | orchestrator | 2026-01-01 01:38:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:38:15.379067 | orchestrator | 2026-01-01 01:38:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:15.379175 | orchestrator | 2026-01-01 01:38:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:18.426387 | orchestrator | 2026-01-01 01:38:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:18.429424 | orchestrator | 2026-01-01 01:38:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:18.429465 | orchestrator | 2026-01-01 01:38:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:21.483306 | orchestrator | 2026-01-01 01:38:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:21.485356 | orchestrator | 2026-01-01 01:38:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:21.485429 | orchestrator | 2026-01-01 01:38:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:24.537131 | orchestrator | 2026-01-01 01:38:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:24.538480 | orchestrator | 2026-01-01 01:38:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:24.538522 | orchestrator | 2026-01-01 01:38:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:27.587654 | orchestrator | 2026-01-01 01:38:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:27.589544 | orchestrator | 2026-01-01 01:38:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:27.589687 | orchestrator | 2026-01-01 01:38:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:30.639425 | orchestrator | 2026-01-01 01:38:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:30.641576 | orchestrator | 2026-01-01 01:38:30 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:30.641650 | orchestrator | 2026-01-01 01:38:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:33.691244 | orchestrator | 2026-01-01 01:38:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:33.692324 | orchestrator | 2026-01-01 01:38:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:33.692383 | orchestrator | 2026-01-01 01:38:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:36.727560 | orchestrator | 2026-01-01 01:38:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:36.729450 | orchestrator | 2026-01-01 01:38:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:36.729501 | orchestrator | 2026-01-01 01:38:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:39.776585 | orchestrator | 2026-01-01 01:38:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:39.777114 | orchestrator | 2026-01-01 01:38:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:39.777134 | orchestrator | 2026-01-01 01:38:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:42.819284 | orchestrator | 2026-01-01 01:38:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:42.821043 | orchestrator | 2026-01-01 01:38:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:42.821060 | orchestrator | 2026-01-01 01:38:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:45.874503 | orchestrator | 2026-01-01 01:38:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:45.876789 | orchestrator | 2026-01-01 01:38:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:38:45.876861 | orchestrator | 2026-01-01 01:38:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:48.925178 | orchestrator | 2026-01-01 01:38:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:48.926755 | orchestrator | 2026-01-01 01:38:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:48.926771 | orchestrator | 2026-01-01 01:38:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:51.979927 | orchestrator | 2026-01-01 01:38:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:51.982507 | orchestrator | 2026-01-01 01:38:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:51.982608 | orchestrator | 2026-01-01 01:38:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:55.038414 | orchestrator | 2026-01-01 01:38:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:55.041056 | orchestrator | 2026-01-01 01:38:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:55.041150 | orchestrator | 2026-01-01 01:38:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:38:58.089267 | orchestrator | 2026-01-01 01:38:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:38:58.091312 | orchestrator | 2026-01-01 01:38:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:38:58.091356 | orchestrator | 2026-01-01 01:38:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:01.138585 | orchestrator | 2026-01-01 01:39:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:01.139536 | orchestrator | 2026-01-01 01:39:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:01.139721 | orchestrator | 2026-01-01 01:39:01 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:39:04.194894 | orchestrator | 2026-01-01 01:39:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:04.196934 | orchestrator | 2026-01-01 01:39:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:04.196984 | orchestrator | 2026-01-01 01:39:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:07.253095 | orchestrator | 2026-01-01 01:39:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:07.259053 | orchestrator | 2026-01-01 01:39:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:07.259166 | orchestrator | 2026-01-01 01:39:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:10.310449 | orchestrator | 2026-01-01 01:39:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:10.313592 | orchestrator | 2026-01-01 01:39:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:10.313758 | orchestrator | 2026-01-01 01:39:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:13.366689 | orchestrator | 2026-01-01 01:39:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:13.369493 | orchestrator | 2026-01-01 01:39:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:13.369564 | orchestrator | 2026-01-01 01:39:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:16.413406 | orchestrator | 2026-01-01 01:39:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:16.417192 | orchestrator | 2026-01-01 01:39:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:16.417251 | orchestrator | 2026-01-01 01:39:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:19.475209 | orchestrator | 2026-01-01 
01:39:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:19.477621 | orchestrator | 2026-01-01 01:39:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:19.477662 | orchestrator | 2026-01-01 01:39:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:22.523303 | orchestrator | 2026-01-01 01:39:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:22.525643 | orchestrator | 2026-01-01 01:39:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:22.525875 | orchestrator | 2026-01-01 01:39:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:25.574774 | orchestrator | 2026-01-01 01:39:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:25.662915 | orchestrator | 2026-01-01 01:39:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:25.662944 | orchestrator | 2026-01-01 01:39:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:28.623126 | orchestrator | 2026-01-01 01:39:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:28.625168 | orchestrator | 2026-01-01 01:39:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:28.625210 | orchestrator | 2026-01-01 01:39:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:31.674739 | orchestrator | 2026-01-01 01:39:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:31.675855 | orchestrator | 2026-01-01 01:39:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:31.676007 | orchestrator | 2026-01-01 01:39:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:34.732387 | orchestrator | 2026-01-01 01:39:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:39:34.734555 | orchestrator | 2026-01-01 01:39:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:34.734709 | orchestrator | 2026-01-01 01:39:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:37.780333 | orchestrator | 2026-01-01 01:39:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:37.782665 | orchestrator | 2026-01-01 01:39:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:37.782798 | orchestrator | 2026-01-01 01:39:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:40.833595 | orchestrator | 2026-01-01 01:39:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:40.982421 | orchestrator | 2026-01-01 01:39:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:40.982495 | orchestrator | 2026-01-01 01:39:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:43.885172 | orchestrator | 2026-01-01 01:39:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:43.886667 | orchestrator | 2026-01-01 01:39:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:43.886744 | orchestrator | 2026-01-01 01:39:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:46.935866 | orchestrator | 2026-01-01 01:39:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:46.937281 | orchestrator | 2026-01-01 01:39:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:46.937335 | orchestrator | 2026-01-01 01:39:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:49.984790 | orchestrator | 2026-01-01 01:39:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:49.986380 | orchestrator | 2026-01-01 01:39:49 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:49.986435 | orchestrator | 2026-01-01 01:39:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:53.045272 | orchestrator | 2026-01-01 01:39:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:53.047926 | orchestrator | 2026-01-01 01:39:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:53.047988 | orchestrator | 2026-01-01 01:39:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:56.092886 | orchestrator | 2026-01-01 01:39:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:56.094856 | orchestrator | 2026-01-01 01:39:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:56.095260 | orchestrator | 2026-01-01 01:39:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:39:59.144827 | orchestrator | 2026-01-01 01:39:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:39:59.147513 | orchestrator | 2026-01-01 01:39:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:39:59.147638 | orchestrator | 2026-01-01 01:39:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:02.196483 | orchestrator | 2026-01-01 01:40:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:40:02.197922 | orchestrator | 2026-01-01 01:40:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:40:02.197969 | orchestrator | 2026-01-01 01:40:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:40:05.244880 | orchestrator | 2026-01-01 01:40:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:40:05.246096 | orchestrator | 2026-01-01 01:40:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:40:05.246279 | orchestrator | 2026-01-01 01:40:05 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:40:08.291444 | orchestrator | 2026-01-01 01:40:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 01:40:08.293077 | orchestrator | 2026-01-01 01:40:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:40:08.293317 | orchestrator | 2026-01-01 01:40:08 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycle repeated every ~3 s from 01:40:11 through 01:45:34; tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 remained in state STARTED throughout ...]
2026-01-01 01:45:37.768872 | orchestrator | 2026-01-01 01:45:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 01:45:37.770938 | orchestrator | 2026-01-01 01:45:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:45:37.771088 | orchestrator | 2026-01-01 01:45:37 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 01:45:40.819600 | orchestrator | 2026-01-01 01:45:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:45:40.822686 | orchestrator | 2026-01-01 01:45:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:45:40.823056 | orchestrator | 2026-01-01 01:45:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:45:43.871800 | orchestrator | 2026-01-01 01:45:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:45:43.873363 | orchestrator | 2026-01-01 01:45:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:45:43.873532 | orchestrator | 2026-01-01 01:45:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:45:46.916708 | orchestrator | 2026-01-01 01:45:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:45:46.917732 | orchestrator | 2026-01-01 01:45:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:45:46.917909 | orchestrator | 2026-01-01 01:45:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:45:49.972674 | orchestrator | 2026-01-01 01:45:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:45:49.976377 | orchestrator | 2026-01-01 01:45:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:45:49.976431 | orchestrator | 2026-01-01 01:45:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:45:53.030619 | orchestrator | 2026-01-01 01:45:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:45:53.031920 | orchestrator | 2026-01-01 01:45:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:45:53.032727 | orchestrator | 2026-01-01 01:45:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:45:56.083716 | orchestrator | 2026-01-01 
01:45:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:45:56.084563 | orchestrator | 2026-01-01 01:45:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:45:56.084601 | orchestrator | 2026-01-01 01:45:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:45:59.129559 | orchestrator | 2026-01-01 01:45:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:45:59.132154 | orchestrator | 2026-01-01 01:45:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:45:59.132362 | orchestrator | 2026-01-01 01:45:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:02.183424 | orchestrator | 2026-01-01 01:46:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:02.186698 | orchestrator | 2026-01-01 01:46:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:02.186785 | orchestrator | 2026-01-01 01:46:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:05.247442 | orchestrator | 2026-01-01 01:46:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:05.249880 | orchestrator | 2026-01-01 01:46:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:05.249943 | orchestrator | 2026-01-01 01:46:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:08.287327 | orchestrator | 2026-01-01 01:46:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:08.289327 | orchestrator | 2026-01-01 01:46:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:08.289376 | orchestrator | 2026-01-01 01:46:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:11.340378 | orchestrator | 2026-01-01 01:46:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:46:11.342304 | orchestrator | 2026-01-01 01:46:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:11.342445 | orchestrator | 2026-01-01 01:46:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:14.391618 | orchestrator | 2026-01-01 01:46:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:14.393704 | orchestrator | 2026-01-01 01:46:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:14.393905 | orchestrator | 2026-01-01 01:46:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:17.434709 | orchestrator | 2026-01-01 01:46:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:17.436399 | orchestrator | 2026-01-01 01:46:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:17.436776 | orchestrator | 2026-01-01 01:46:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:20.487759 | orchestrator | 2026-01-01 01:46:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:20.489601 | orchestrator | 2026-01-01 01:46:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:20.489727 | orchestrator | 2026-01-01 01:46:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:23.542248 | orchestrator | 2026-01-01 01:46:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:23.544617 | orchestrator | 2026-01-01 01:46:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:23.545106 | orchestrator | 2026-01-01 01:46:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:26.595629 | orchestrator | 2026-01-01 01:46:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:26.597478 | orchestrator | 2026-01-01 01:46:26 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:26.597496 | orchestrator | 2026-01-01 01:46:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:29.648980 | orchestrator | 2026-01-01 01:46:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:29.651268 | orchestrator | 2026-01-01 01:46:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:29.651349 | orchestrator | 2026-01-01 01:46:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:32.697541 | orchestrator | 2026-01-01 01:46:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:32.702382 | orchestrator | 2026-01-01 01:46:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:32.702441 | orchestrator | 2026-01-01 01:46:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:35.748956 | orchestrator | 2026-01-01 01:46:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:35.750505 | orchestrator | 2026-01-01 01:46:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:35.750736 | orchestrator | 2026-01-01 01:46:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:38.800930 | orchestrator | 2026-01-01 01:46:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:38.802579 | orchestrator | 2026-01-01 01:46:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:38.802645 | orchestrator | 2026-01-01 01:46:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:41.856839 | orchestrator | 2026-01-01 01:46:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:41.857773 | orchestrator | 2026-01-01 01:46:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:46:41.857806 | orchestrator | 2026-01-01 01:46:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:44.907114 | orchestrator | 2026-01-01 01:46:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:44.908867 | orchestrator | 2026-01-01 01:46:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:44.908930 | orchestrator | 2026-01-01 01:46:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:47.959388 | orchestrator | 2026-01-01 01:46:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:47.960693 | orchestrator | 2026-01-01 01:46:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:47.960726 | orchestrator | 2026-01-01 01:46:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:51.015175 | orchestrator | 2026-01-01 01:46:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:51.017199 | orchestrator | 2026-01-01 01:46:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:51.017359 | orchestrator | 2026-01-01 01:46:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:54.066647 | orchestrator | 2026-01-01 01:46:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:54.069130 | orchestrator | 2026-01-01 01:46:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:54.069174 | orchestrator | 2026-01-01 01:46:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:46:57.122268 | orchestrator | 2026-01-01 01:46:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:46:57.123800 | orchestrator | 2026-01-01 01:46:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:46:57.123848 | orchestrator | 2026-01-01 01:46:57 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:47:00.171233 | orchestrator | 2026-01-01 01:47:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:00.173287 | orchestrator | 2026-01-01 01:47:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:00.173367 | orchestrator | 2026-01-01 01:47:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:03.225491 | orchestrator | 2026-01-01 01:47:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:03.227919 | orchestrator | 2026-01-01 01:47:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:03.227963 | orchestrator | 2026-01-01 01:47:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:06.268431 | orchestrator | 2026-01-01 01:47:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:06.270274 | orchestrator | 2026-01-01 01:47:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:06.270308 | orchestrator | 2026-01-01 01:47:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:09.316676 | orchestrator | 2026-01-01 01:47:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:09.318349 | orchestrator | 2026-01-01 01:47:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:09.318566 | orchestrator | 2026-01-01 01:47:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:12.370830 | orchestrator | 2026-01-01 01:47:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:12.372598 | orchestrator | 2026-01-01 01:47:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:12.372838 | orchestrator | 2026-01-01 01:47:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:15.434939 | orchestrator | 2026-01-01 
01:47:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:15.437612 | orchestrator | 2026-01-01 01:47:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:15.437652 | orchestrator | 2026-01-01 01:47:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:18.486105 | orchestrator | 2026-01-01 01:47:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:18.487327 | orchestrator | 2026-01-01 01:47:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:18.487364 | orchestrator | 2026-01-01 01:47:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:21.539970 | orchestrator | 2026-01-01 01:47:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:21.541603 | orchestrator | 2026-01-01 01:47:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:21.541637 | orchestrator | 2026-01-01 01:47:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:24.594172 | orchestrator | 2026-01-01 01:47:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:24.596063 | orchestrator | 2026-01-01 01:47:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:24.596114 | orchestrator | 2026-01-01 01:47:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:27.641068 | orchestrator | 2026-01-01 01:47:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:27.642465 | orchestrator | 2026-01-01 01:47:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:27.642562 | orchestrator | 2026-01-01 01:47:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:30.692966 | orchestrator | 2026-01-01 01:47:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:47:30.695879 | orchestrator | 2026-01-01 01:47:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:30.695963 | orchestrator | 2026-01-01 01:47:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:33.746672 | orchestrator | 2026-01-01 01:47:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:33.748192 | orchestrator | 2026-01-01 01:47:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:33.748207 | orchestrator | 2026-01-01 01:47:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:36.799237 | orchestrator | 2026-01-01 01:47:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:36.801644 | orchestrator | 2026-01-01 01:47:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:36.801784 | orchestrator | 2026-01-01 01:47:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:39.856242 | orchestrator | 2026-01-01 01:47:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:39.857493 | orchestrator | 2026-01-01 01:47:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:39.857653 | orchestrator | 2026-01-01 01:47:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:42.908278 | orchestrator | 2026-01-01 01:47:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:42.909829 | orchestrator | 2026-01-01 01:47:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:42.909865 | orchestrator | 2026-01-01 01:47:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:45.960786 | orchestrator | 2026-01-01 01:47:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:45.961857 | orchestrator | 2026-01-01 01:47:45 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:45.962090 | orchestrator | 2026-01-01 01:47:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:49.017557 | orchestrator | 2026-01-01 01:47:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:49.021805 | orchestrator | 2026-01-01 01:47:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:49.021917 | orchestrator | 2026-01-01 01:47:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:52.067774 | orchestrator | 2026-01-01 01:47:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:52.069377 | orchestrator | 2026-01-01 01:47:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:52.069455 | orchestrator | 2026-01-01 01:47:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:55.117148 | orchestrator | 2026-01-01 01:47:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:55.119169 | orchestrator | 2026-01-01 01:47:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:55.119206 | orchestrator | 2026-01-01 01:47:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:47:58.163844 | orchestrator | 2026-01-01 01:47:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:47:58.165113 | orchestrator | 2026-01-01 01:47:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:47:58.165166 | orchestrator | 2026-01-01 01:47:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:01.210736 | orchestrator | 2026-01-01 01:48:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:01.211230 | orchestrator | 2026-01-01 01:48:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:48:01.211279 | orchestrator | 2026-01-01 01:48:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:04.252849 | orchestrator | 2026-01-01 01:48:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:04.254425 | orchestrator | 2026-01-01 01:48:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:04.254443 | orchestrator | 2026-01-01 01:48:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:07.304403 | orchestrator | 2026-01-01 01:48:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:07.306223 | orchestrator | 2026-01-01 01:48:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:07.306402 | orchestrator | 2026-01-01 01:48:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:10.353640 | orchestrator | 2026-01-01 01:48:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:10.355882 | orchestrator | 2026-01-01 01:48:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:10.355945 | orchestrator | 2026-01-01 01:48:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:13.404525 | orchestrator | 2026-01-01 01:48:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:13.406112 | orchestrator | 2026-01-01 01:48:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:13.406184 | orchestrator | 2026-01-01 01:48:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:16.458059 | orchestrator | 2026-01-01 01:48:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:16.459190 | orchestrator | 2026-01-01 01:48:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:16.459225 | orchestrator | 2026-01-01 01:48:16 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:48:19.505598 | orchestrator | 2026-01-01 01:48:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:19.507780 | orchestrator | 2026-01-01 01:48:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:19.508213 | orchestrator | 2026-01-01 01:48:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:22.555984 | orchestrator | 2026-01-01 01:48:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:22.559383 | orchestrator | 2026-01-01 01:48:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:22.559763 | orchestrator | 2026-01-01 01:48:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:25.609390 | orchestrator | 2026-01-01 01:48:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:25.610884 | orchestrator | 2026-01-01 01:48:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:25.610943 | orchestrator | 2026-01-01 01:48:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:28.659544 | orchestrator | 2026-01-01 01:48:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:28.662124 | orchestrator | 2026-01-01 01:48:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:28.662319 | orchestrator | 2026-01-01 01:48:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:31.703870 | orchestrator | 2026-01-01 01:48:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:31.705696 | orchestrator | 2026-01-01 01:48:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:31.706188 | orchestrator | 2026-01-01 01:48:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:34.760286 | orchestrator | 2026-01-01 
01:48:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:34.763499 | orchestrator | 2026-01-01 01:48:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:34.763561 | orchestrator | 2026-01-01 01:48:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:37.812366 | orchestrator | 2026-01-01 01:48:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:37.813152 | orchestrator | 2026-01-01 01:48:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:37.813176 | orchestrator | 2026-01-01 01:48:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:40.861843 | orchestrator | 2026-01-01 01:48:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:40.862410 | orchestrator | 2026-01-01 01:48:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:40.862541 | orchestrator | 2026-01-01 01:48:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:43.916289 | orchestrator | 2026-01-01 01:48:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:43.919049 | orchestrator | 2026-01-01 01:48:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:43.919160 | orchestrator | 2026-01-01 01:48:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:46.967576 | orchestrator | 2026-01-01 01:48:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:46.969915 | orchestrator | 2026-01-01 01:48:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:46.970068 | orchestrator | 2026-01-01 01:48:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:50.020756 | orchestrator | 2026-01-01 01:48:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:48:50.022336 | orchestrator | 2026-01-01 01:48:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:50.022384 | orchestrator | 2026-01-01 01:48:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:53.064211 | orchestrator | 2026-01-01 01:48:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:53.064303 | orchestrator | 2026-01-01 01:48:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:53.064319 | orchestrator | 2026-01-01 01:48:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:56.109741 | orchestrator | 2026-01-01 01:48:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:56.110407 | orchestrator | 2026-01-01 01:48:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:56.110862 | orchestrator | 2026-01-01 01:48:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:48:59.157364 | orchestrator | 2026-01-01 01:48:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:48:59.159659 | orchestrator | 2026-01-01 01:48:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:48:59.159734 | orchestrator | 2026-01-01 01:48:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:02.205770 | orchestrator | 2026-01-01 01:49:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:49:02.208710 | orchestrator | 2026-01-01 01:49:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:49:02.208762 | orchestrator | 2026-01-01 01:49:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:05.264230 | orchestrator | 2026-01-01 01:49:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:49:05.265322 | orchestrator | 2026-01-01 01:49:05 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:49:05.265525 | orchestrator | 2026-01-01 01:49:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:08.311157 | orchestrator | 2026-01-01 01:49:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:49:08.312743 | orchestrator | 2026-01-01 01:49:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:49:08.312783 | orchestrator | 2026-01-01 01:49:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:11.367258 | orchestrator | 2026-01-01 01:49:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:49:11.367797 | orchestrator | 2026-01-01 01:49:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:49:11.367901 | orchestrator | 2026-01-01 01:49:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:14.417724 | orchestrator | 2026-01-01 01:49:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:49:14.417822 | orchestrator | 2026-01-01 01:49:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:49:14.417837 | orchestrator | 2026-01-01 01:49:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:17.459564 | orchestrator | 2026-01-01 01:49:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:49:17.461603 | orchestrator | 2026-01-01 01:49:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:49:17.461629 | orchestrator | 2026-01-01 01:49:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:49:20.509701 | orchestrator | 2026-01-01 01:49:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:49:20.511416 | orchestrator | 2026-01-01 01:49:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
2026-01-01 01:49:20.511449 | orchestrator | 2026-01-01 01:49:20 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:49:23.554500 | orchestrator | 2026-01-01 01:49:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 01:49:23.556326 | orchestrator | 2026-01-01 01:49:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:49:23.556376 | orchestrator | 2026-01-01 01:49:23 | INFO  | Wait 1 second(s) until the next check
[... the same three-line polling cycle (tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 both in state STARTED, then "Wait 1 second(s) until the next check") repeats every ~3 seconds from 01:49:26 through 01:54:19; identical entries elided ...]
2026-01-01 01:54:22.568090 | orchestrator | 2026-01-01 01:54:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 01:54:22.569927 | orchestrator | 2026-01-01 01:54:22 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:22.569960 | orchestrator | 2026-01-01 01:54:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:54:25.630286 | orchestrator | 2026-01-01 01:54:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:25.632341 | orchestrator | 2026-01-01 01:54:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:25.632385 | orchestrator | 2026-01-01 01:54:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:54:28.682475 | orchestrator | 2026-01-01 01:54:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:28.684474 | orchestrator | 2026-01-01 01:54:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:28.684659 | orchestrator | 2026-01-01 01:54:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:54:31.736602 | orchestrator | 2026-01-01 01:54:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:31.737321 | orchestrator | 2026-01-01 01:54:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:31.737419 | orchestrator | 2026-01-01 01:54:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:54:34.782470 | orchestrator | 2026-01-01 01:54:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:34.783314 | orchestrator | 2026-01-01 01:54:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:34.783477 | orchestrator | 2026-01-01 01:54:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:54:37.825892 | orchestrator | 2026-01-01 01:54:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:37.826874 | orchestrator | 2026-01-01 01:54:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:54:37.826958 | orchestrator | 2026-01-01 01:54:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:54:40.867489 | orchestrator | 2026-01-01 01:54:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:40.869116 | orchestrator | 2026-01-01 01:54:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:40.869898 | orchestrator | 2026-01-01 01:54:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:54:43.912812 | orchestrator | 2026-01-01 01:54:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:43.918540 | orchestrator | 2026-01-01 01:54:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:43.918596 | orchestrator | 2026-01-01 01:54:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:54:46.965574 | orchestrator | 2026-01-01 01:54:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:46.966527 | orchestrator | 2026-01-01 01:54:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:46.966563 | orchestrator | 2026-01-01 01:54:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:54:50.017546 | orchestrator | 2026-01-01 01:54:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:50.018539 | orchestrator | 2026-01-01 01:54:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:50.018586 | orchestrator | 2026-01-01 01:54:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:54:53.058109 | orchestrator | 2026-01-01 01:54:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:53.059337 | orchestrator | 2026-01-01 01:54:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:53.059565 | orchestrator | 2026-01-01 01:54:53 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:54:56.105897 | orchestrator | 2026-01-01 01:54:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:56.106833 | orchestrator | 2026-01-01 01:54:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:56.106860 | orchestrator | 2026-01-01 01:54:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:54:59.153109 | orchestrator | 2026-01-01 01:54:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:54:59.155911 | orchestrator | 2026-01-01 01:54:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:54:59.155975 | orchestrator | 2026-01-01 01:54:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:02.204078 | orchestrator | 2026-01-01 01:55:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:02.206149 | orchestrator | 2026-01-01 01:55:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:02.206189 | orchestrator | 2026-01-01 01:55:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:05.259473 | orchestrator | 2026-01-01 01:55:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:05.261582 | orchestrator | 2026-01-01 01:55:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:05.261635 | orchestrator | 2026-01-01 01:55:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:08.307442 | orchestrator | 2026-01-01 01:55:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:08.311542 | orchestrator | 2026-01-01 01:55:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:08.312113 | orchestrator | 2026-01-01 01:55:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:11.366468 | orchestrator | 2026-01-01 
01:55:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:11.369042 | orchestrator | 2026-01-01 01:55:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:11.369092 | orchestrator | 2026-01-01 01:55:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:14.415767 | orchestrator | 2026-01-01 01:55:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:14.417021 | orchestrator | 2026-01-01 01:55:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:14.417321 | orchestrator | 2026-01-01 01:55:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:17.472577 | orchestrator | 2026-01-01 01:55:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:17.476492 | orchestrator | 2026-01-01 01:55:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:17.476584 | orchestrator | 2026-01-01 01:55:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:20.524460 | orchestrator | 2026-01-01 01:55:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:20.525322 | orchestrator | 2026-01-01 01:55:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:20.525379 | orchestrator | 2026-01-01 01:55:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:23.572732 | orchestrator | 2026-01-01 01:55:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:23.574404 | orchestrator | 2026-01-01 01:55:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:23.574471 | orchestrator | 2026-01-01 01:55:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:26.621084 | orchestrator | 2026-01-01 01:55:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:55:26.622853 | orchestrator | 2026-01-01 01:55:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:26.622905 | orchestrator | 2026-01-01 01:55:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:29.672123 | orchestrator | 2026-01-01 01:55:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:29.673729 | orchestrator | 2026-01-01 01:55:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:29.673766 | orchestrator | 2026-01-01 01:55:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:32.722722 | orchestrator | 2026-01-01 01:55:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:32.724341 | orchestrator | 2026-01-01 01:55:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:32.724378 | orchestrator | 2026-01-01 01:55:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:35.769096 | orchestrator | 2026-01-01 01:55:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:35.772209 | orchestrator | 2026-01-01 01:55:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:35.772246 | orchestrator | 2026-01-01 01:55:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:38.814699 | orchestrator | 2026-01-01 01:55:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:38.816268 | orchestrator | 2026-01-01 01:55:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:38.816313 | orchestrator | 2026-01-01 01:55:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:41.865624 | orchestrator | 2026-01-01 01:55:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:41.868188 | orchestrator | 2026-01-01 01:55:41 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:41.868225 | orchestrator | 2026-01-01 01:55:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:44.913423 | orchestrator | 2026-01-01 01:55:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:44.915197 | orchestrator | 2026-01-01 01:55:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:44.915278 | orchestrator | 2026-01-01 01:55:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:47.968777 | orchestrator | 2026-01-01 01:55:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:47.968896 | orchestrator | 2026-01-01 01:55:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:47.968905 | orchestrator | 2026-01-01 01:55:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:51.016998 | orchestrator | 2026-01-01 01:55:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:51.018195 | orchestrator | 2026-01-01 01:55:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:51.018250 | orchestrator | 2026-01-01 01:55:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:54.060542 | orchestrator | 2026-01-01 01:55:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:54.064043 | orchestrator | 2026-01-01 01:55:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:55:54.064105 | orchestrator | 2026-01-01 01:55:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:55:57.120980 | orchestrator | 2026-01-01 01:55:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:55:57.122566 | orchestrator | 2026-01-01 01:55:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:55:57.122618 | orchestrator | 2026-01-01 01:55:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:00.178706 | orchestrator | 2026-01-01 01:56:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:00.181414 | orchestrator | 2026-01-01 01:56:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:00.181569 | orchestrator | 2026-01-01 01:56:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:03.231773 | orchestrator | 2026-01-01 01:56:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:03.235011 | orchestrator | 2026-01-01 01:56:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:03.235053 | orchestrator | 2026-01-01 01:56:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:06.286576 | orchestrator | 2026-01-01 01:56:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:06.288377 | orchestrator | 2026-01-01 01:56:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:06.288476 | orchestrator | 2026-01-01 01:56:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:09.337615 | orchestrator | 2026-01-01 01:56:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:09.340470 | orchestrator | 2026-01-01 01:56:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:09.340570 | orchestrator | 2026-01-01 01:56:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:12.390444 | orchestrator | 2026-01-01 01:56:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:12.391511 | orchestrator | 2026-01-01 01:56:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:12.391804 | orchestrator | 2026-01-01 01:56:12 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:56:15.442584 | orchestrator | 2026-01-01 01:56:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:15.445758 | orchestrator | 2026-01-01 01:56:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:15.445901 | orchestrator | 2026-01-01 01:56:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:18.498630 | orchestrator | 2026-01-01 01:56:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:18.499988 | orchestrator | 2026-01-01 01:56:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:18.500177 | orchestrator | 2026-01-01 01:56:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:21.548882 | orchestrator | 2026-01-01 01:56:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:21.549891 | orchestrator | 2026-01-01 01:56:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:21.550128 | orchestrator | 2026-01-01 01:56:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:24.593175 | orchestrator | 2026-01-01 01:56:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:24.595069 | orchestrator | 2026-01-01 01:56:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:24.595139 | orchestrator | 2026-01-01 01:56:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:27.643200 | orchestrator | 2026-01-01 01:56:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:27.645496 | orchestrator | 2026-01-01 01:56:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:27.645741 | orchestrator | 2026-01-01 01:56:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:30.683665 | orchestrator | 2026-01-01 
01:56:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:30.686372 | orchestrator | 2026-01-01 01:56:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:30.686411 | orchestrator | 2026-01-01 01:56:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:33.737754 | orchestrator | 2026-01-01 01:56:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:33.739315 | orchestrator | 2026-01-01 01:56:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:33.739348 | orchestrator | 2026-01-01 01:56:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:36.793719 | orchestrator | 2026-01-01 01:56:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:36.796002 | orchestrator | 2026-01-01 01:56:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:36.796053 | orchestrator | 2026-01-01 01:56:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:39.849279 | orchestrator | 2026-01-01 01:56:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:39.853696 | orchestrator | 2026-01-01 01:56:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:39.853863 | orchestrator | 2026-01-01 01:56:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:42.901724 | orchestrator | 2026-01-01 01:56:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:42.906086 | orchestrator | 2026-01-01 01:56:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:42.906128 | orchestrator | 2026-01-01 01:56:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:45.956430 | orchestrator | 2026-01-01 01:56:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 01:56:45.958421 | orchestrator | 2026-01-01 01:56:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:45.958457 | orchestrator | 2026-01-01 01:56:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:49.013375 | orchestrator | 2026-01-01 01:56:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:49.016442 | orchestrator | 2026-01-01 01:56:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:49.016606 | orchestrator | 2026-01-01 01:56:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:52.071891 | orchestrator | 2026-01-01 01:56:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:52.073710 | orchestrator | 2026-01-01 01:56:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:52.073920 | orchestrator | 2026-01-01 01:56:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:55.120214 | orchestrator | 2026-01-01 01:56:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:55.121353 | orchestrator | 2026-01-01 01:56:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:55.121428 | orchestrator | 2026-01-01 01:56:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:56:58.173711 | orchestrator | 2026-01-01 01:56:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:56:58.177186 | orchestrator | 2026-01-01 01:56:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:56:58.177657 | orchestrator | 2026-01-01 01:56:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:01.226082 | orchestrator | 2026-01-01 01:57:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:01.227987 | orchestrator | 2026-01-01 01:57:01 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:01.228020 | orchestrator | 2026-01-01 01:57:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:04.280362 | orchestrator | 2026-01-01 01:57:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:04.281735 | orchestrator | 2026-01-01 01:57:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:04.281970 | orchestrator | 2026-01-01 01:57:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:07.331500 | orchestrator | 2026-01-01 01:57:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:07.333170 | orchestrator | 2026-01-01 01:57:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:07.333442 | orchestrator | 2026-01-01 01:57:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:10.378283 | orchestrator | 2026-01-01 01:57:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:10.379990 | orchestrator | 2026-01-01 01:57:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:10.380169 | orchestrator | 2026-01-01 01:57:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:13.431973 | orchestrator | 2026-01-01 01:57:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:13.434557 | orchestrator | 2026-01-01 01:57:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:13.434711 | orchestrator | 2026-01-01 01:57:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:16.489333 | orchestrator | 2026-01-01 01:57:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:16.490823 | orchestrator | 2026-01-01 01:57:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
01:57:16.490855 | orchestrator | 2026-01-01 01:57:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:19.541009 | orchestrator | 2026-01-01 01:57:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:19.544563 | orchestrator | 2026-01-01 01:57:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:19.544624 | orchestrator | 2026-01-01 01:57:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:22.593577 | orchestrator | 2026-01-01 01:57:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:22.595584 | orchestrator | 2026-01-01 01:57:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:22.596039 | orchestrator | 2026-01-01 01:57:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:25.646122 | orchestrator | 2026-01-01 01:57:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:25.647899 | orchestrator | 2026-01-01 01:57:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:25.648032 | orchestrator | 2026-01-01 01:57:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:28.703373 | orchestrator | 2026-01-01 01:57:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:28.705061 | orchestrator | 2026-01-01 01:57:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:28.705090 | orchestrator | 2026-01-01 01:57:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:31.753280 | orchestrator | 2026-01-01 01:57:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:31.753852 | orchestrator | 2026-01-01 01:57:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:31.753900 | orchestrator | 2026-01-01 01:57:31 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 01:57:34.811625 | orchestrator | 2026-01-01 01:57:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:34.813656 | orchestrator | 2026-01-01 01:57:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:34.813706 | orchestrator | 2026-01-01 01:57:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:37.863514 | orchestrator | 2026-01-01 01:57:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:37.864805 | orchestrator | 2026-01-01 01:57:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:37.865149 | orchestrator | 2026-01-01 01:57:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:40.912471 | orchestrator | 2026-01-01 01:57:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:40.913698 | orchestrator | 2026-01-01 01:57:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:40.914090 | orchestrator | 2026-01-01 01:57:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:43.959614 | orchestrator | 2026-01-01 01:57:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:43.961575 | orchestrator | 2026-01-01 01:57:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:43.961611 | orchestrator | 2026-01-01 01:57:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:47.007600 | orchestrator | 2026-01-01 01:57:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:47.010340 | orchestrator | 2026-01-01 01:57:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:47.010396 | orchestrator | 2026-01-01 01:57:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:50.058866 | orchestrator | 2026-01-01 
01:57:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:50.059108 | orchestrator | 2026-01-01 01:57:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:50.059137 | orchestrator | 2026-01-01 01:57:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:53.097705 | orchestrator | 2026-01-01 01:57:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:53.097877 | orchestrator | 2026-01-01 01:57:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:53.097894 | orchestrator | 2026-01-01 01:57:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:56.141448 | orchestrator | 2026-01-01 01:57:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:56.142109 | orchestrator | 2026-01-01 01:57:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:56.142153 | orchestrator | 2026-01-01 01:57:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:57:59.188511 | orchestrator | 2026-01-01 01:57:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:57:59.189150 | orchestrator | 2026-01-01 01:57:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:57:59.189477 | orchestrator | 2026-01-01 01:57:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:02.242326 | orchestrator | 2026-01-01 01:58:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 01:58:02.242866 | orchestrator | 2026-01-01 01:58:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 01:58:02.243037 | orchestrator | 2026-01-01 01:58:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 01:58:05.290818 | orchestrator | 2026-01-01 01:58:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED
2026-01-01 01:58:05.291794 | orchestrator | 2026-01-01 01:58:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:58:05.291819 | orchestrator | 2026-01-01 01:58:05 | INFO  | Wait 1 second(s) until the next check
2026-01-01 01:58:08.340141 | orchestrator | 2026-01-01 01:58:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 01:58:08.342680 | orchestrator | 2026-01-01 01:58:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 01:58:08.342747 | orchestrator | 2026-01-01 01:58:08 | INFO  | Wait 1 second(s) until the next check
[... the same two state checks and wait notice repeat roughly every 3 seconds from 01:58:11 to 02:03:34; tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 remained in state STARTED throughout ...]
2026-01-01 02:03:37.825752 | orchestrator | 2026-01-01 02:03:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 02:03:37.826677 | orchestrator | 2026-01-01 02:03:37 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:03:37.826720 | orchestrator | 2026-01-01 02:03:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:03:40.872491 | orchestrator | 2026-01-01 02:03:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:03:40.874963 | orchestrator | 2026-01-01 02:03:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:03:40.875026 | orchestrator | 2026-01-01 02:03:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:03:43.928852 | orchestrator | 2026-01-01 02:03:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:03:43.931825 | orchestrator | 2026-01-01 02:03:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:03:43.932041 | orchestrator | 2026-01-01 02:03:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:03:46.981223 | orchestrator | 2026-01-01 02:03:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:03:46.982935 | orchestrator | 2026-01-01 02:03:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:03:46.982983 | orchestrator | 2026-01-01 02:03:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:03:50.038189 | orchestrator | 2026-01-01 02:03:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:03:50.038351 | orchestrator | 2026-01-01 02:03:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:03:50.038364 | orchestrator | 2026-01-01 02:03:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:03:53.089480 | orchestrator | 2026-01-01 02:03:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:03:53.091772 | orchestrator | 2026-01-01 02:03:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:03:53.091811 | orchestrator | 2026-01-01 02:03:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:03:56.140700 | orchestrator | 2026-01-01 02:03:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:03:56.141053 | orchestrator | 2026-01-01 02:03:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:03:56.141536 | orchestrator | 2026-01-01 02:03:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:03:59.193225 | orchestrator | 2026-01-01 02:03:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:03:59.196396 | orchestrator | 2026-01-01 02:03:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:03:59.196765 | orchestrator | 2026-01-01 02:03:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:02.243711 | orchestrator | 2026-01-01 02:04:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:02.245622 | orchestrator | 2026-01-01 02:04:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:02.245679 | orchestrator | 2026-01-01 02:04:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:05.290766 | orchestrator | 2026-01-01 02:04:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:05.292418 | orchestrator | 2026-01-01 02:04:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:05.292503 | orchestrator | 2026-01-01 02:04:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:08.338378 | orchestrator | 2026-01-01 02:04:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:08.340151 | orchestrator | 2026-01-01 02:04:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:08.340206 | orchestrator | 2026-01-01 02:04:08 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:04:11.391932 | orchestrator | 2026-01-01 02:04:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:11.397712 | orchestrator | 2026-01-01 02:04:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:11.397781 | orchestrator | 2026-01-01 02:04:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:14.450130 | orchestrator | 2026-01-01 02:04:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:14.452312 | orchestrator | 2026-01-01 02:04:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:14.452427 | orchestrator | 2026-01-01 02:04:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:17.504658 | orchestrator | 2026-01-01 02:04:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:17.508844 | orchestrator | 2026-01-01 02:04:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:17.509247 | orchestrator | 2026-01-01 02:04:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:20.554863 | orchestrator | 2026-01-01 02:04:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:20.557911 | orchestrator | 2026-01-01 02:04:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:20.557978 | orchestrator | 2026-01-01 02:04:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:23.607723 | orchestrator | 2026-01-01 02:04:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:23.609937 | orchestrator | 2026-01-01 02:04:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:23.610008 | orchestrator | 2026-01-01 02:04:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:26.655279 | orchestrator | 2026-01-01 
02:04:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:26.656947 | orchestrator | 2026-01-01 02:04:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:26.657046 | orchestrator | 2026-01-01 02:04:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:29.699249 | orchestrator | 2026-01-01 02:04:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:29.701261 | orchestrator | 2026-01-01 02:04:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:29.701318 | orchestrator | 2026-01-01 02:04:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:32.752165 | orchestrator | 2026-01-01 02:04:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:32.754226 | orchestrator | 2026-01-01 02:04:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:32.754272 | orchestrator | 2026-01-01 02:04:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:35.807175 | orchestrator | 2026-01-01 02:04:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:35.809628 | orchestrator | 2026-01-01 02:04:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:35.809705 | orchestrator | 2026-01-01 02:04:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:38.851167 | orchestrator | 2026-01-01 02:04:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:38.853066 | orchestrator | 2026-01-01 02:04:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:38.853102 | orchestrator | 2026-01-01 02:04:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:41.893472 | orchestrator | 2026-01-01 02:04:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:04:41.894950 | orchestrator | 2026-01-01 02:04:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:41.894998 | orchestrator | 2026-01-01 02:04:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:44.933240 | orchestrator | 2026-01-01 02:04:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:44.933936 | orchestrator | 2026-01-01 02:04:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:44.934009 | orchestrator | 2026-01-01 02:04:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:47.983974 | orchestrator | 2026-01-01 02:04:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:47.985339 | orchestrator | 2026-01-01 02:04:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:47.985755 | orchestrator | 2026-01-01 02:04:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:51.034462 | orchestrator | 2026-01-01 02:04:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:51.036448 | orchestrator | 2026-01-01 02:04:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:51.036953 | orchestrator | 2026-01-01 02:04:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:54.083912 | orchestrator | 2026-01-01 02:04:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:54.087442 | orchestrator | 2026-01-01 02:04:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:54.087509 | orchestrator | 2026-01-01 02:04:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:04:57.138859 | orchestrator | 2026-01-01 02:04:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:04:57.141651 | orchestrator | 2026-01-01 02:04:57 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:04:57.141697 | orchestrator | 2026-01-01 02:04:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:00.180819 | orchestrator | 2026-01-01 02:05:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:00.182934 | orchestrator | 2026-01-01 02:05:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:00.182968 | orchestrator | 2026-01-01 02:05:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:03.227870 | orchestrator | 2026-01-01 02:05:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:03.230382 | orchestrator | 2026-01-01 02:05:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:03.230511 | orchestrator | 2026-01-01 02:05:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:06.281554 | orchestrator | 2026-01-01 02:05:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:06.284127 | orchestrator | 2026-01-01 02:05:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:06.284219 | orchestrator | 2026-01-01 02:05:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:09.335112 | orchestrator | 2026-01-01 02:05:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:09.337635 | orchestrator | 2026-01-01 02:05:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:09.337677 | orchestrator | 2026-01-01 02:05:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:12.384240 | orchestrator | 2026-01-01 02:05:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:12.385477 | orchestrator | 2026-01-01 02:05:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:05:12.385783 | orchestrator | 2026-01-01 02:05:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:15.431750 | orchestrator | 2026-01-01 02:05:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:15.433378 | orchestrator | 2026-01-01 02:05:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:15.433459 | orchestrator | 2026-01-01 02:05:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:18.479687 | orchestrator | 2026-01-01 02:05:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:18.480547 | orchestrator | 2026-01-01 02:05:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:18.480610 | orchestrator | 2026-01-01 02:05:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:21.526674 | orchestrator | 2026-01-01 02:05:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:21.529844 | orchestrator | 2026-01-01 02:05:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:21.529885 | orchestrator | 2026-01-01 02:05:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:24.580755 | orchestrator | 2026-01-01 02:05:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:24.583054 | orchestrator | 2026-01-01 02:05:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:24.583105 | orchestrator | 2026-01-01 02:05:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:27.628397 | orchestrator | 2026-01-01 02:05:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:27.629453 | orchestrator | 2026-01-01 02:05:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:27.629491 | orchestrator | 2026-01-01 02:05:27 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:05:30.681495 | orchestrator | 2026-01-01 02:05:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:30.683305 | orchestrator | 2026-01-01 02:05:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:30.683355 | orchestrator | 2026-01-01 02:05:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:33.728243 | orchestrator | 2026-01-01 02:05:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:33.729241 | orchestrator | 2026-01-01 02:05:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:33.729282 | orchestrator | 2026-01-01 02:05:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:36.785022 | orchestrator | 2026-01-01 02:05:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:36.787837 | orchestrator | 2026-01-01 02:05:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:36.787924 | orchestrator | 2026-01-01 02:05:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:39.831343 | orchestrator | 2026-01-01 02:05:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:39.832918 | orchestrator | 2026-01-01 02:05:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:39.832971 | orchestrator | 2026-01-01 02:05:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:42.878302 | orchestrator | 2026-01-01 02:05:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:42.880834 | orchestrator | 2026-01-01 02:05:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:42.880918 | orchestrator | 2026-01-01 02:05:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:45.929309 | orchestrator | 2026-01-01 
02:05:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:45.931365 | orchestrator | 2026-01-01 02:05:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:45.931529 | orchestrator | 2026-01-01 02:05:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:48.981876 | orchestrator | 2026-01-01 02:05:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:48.984046 | orchestrator | 2026-01-01 02:05:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:48.984225 | orchestrator | 2026-01-01 02:05:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:52.032072 | orchestrator | 2026-01-01 02:05:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:52.034136 | orchestrator | 2026-01-01 02:05:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:52.034190 | orchestrator | 2026-01-01 02:05:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:55.083062 | orchestrator | 2026-01-01 02:05:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:55.084727 | orchestrator | 2026-01-01 02:05:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:55.084799 | orchestrator | 2026-01-01 02:05:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:05:58.130746 | orchestrator | 2026-01-01 02:05:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:05:58.132986 | orchestrator | 2026-01-01 02:05:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:05:58.133036 | orchestrator | 2026-01-01 02:05:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:01.182235 | orchestrator | 2026-01-01 02:06:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:06:01.184976 | orchestrator | 2026-01-01 02:06:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:01.185349 | orchestrator | 2026-01-01 02:06:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:04.237043 | orchestrator | 2026-01-01 02:06:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:04.238472 | orchestrator | 2026-01-01 02:06:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:04.238518 | orchestrator | 2026-01-01 02:06:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:07.290309 | orchestrator | 2026-01-01 02:06:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:07.293360 | orchestrator | 2026-01-01 02:06:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:07.293434 | orchestrator | 2026-01-01 02:06:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:10.351255 | orchestrator | 2026-01-01 02:06:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:10.352792 | orchestrator | 2026-01-01 02:06:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:10.352835 | orchestrator | 2026-01-01 02:06:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:13.401795 | orchestrator | 2026-01-01 02:06:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:13.403894 | orchestrator | 2026-01-01 02:06:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:13.403967 | orchestrator | 2026-01-01 02:06:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:16.444724 | orchestrator | 2026-01-01 02:06:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:16.446406 | orchestrator | 2026-01-01 02:06:16 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:16.446477 | orchestrator | 2026-01-01 02:06:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:19.491927 | orchestrator | 2026-01-01 02:06:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:19.493405 | orchestrator | 2026-01-01 02:06:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:19.493485 | orchestrator | 2026-01-01 02:06:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:22.539760 | orchestrator | 2026-01-01 02:06:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:22.540145 | orchestrator | 2026-01-01 02:06:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:22.540430 | orchestrator | 2026-01-01 02:06:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:25.585232 | orchestrator | 2026-01-01 02:06:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:25.586152 | orchestrator | 2026-01-01 02:06:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:25.586187 | orchestrator | 2026-01-01 02:06:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:28.633662 | orchestrator | 2026-01-01 02:06:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:28.635749 | orchestrator | 2026-01-01 02:06:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:28.636045 | orchestrator | 2026-01-01 02:06:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:31.688449 | orchestrator | 2026-01-01 02:06:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:31.690390 | orchestrator | 2026-01-01 02:06:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:06:31.690480 | orchestrator | 2026-01-01 02:06:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:34.738624 | orchestrator | 2026-01-01 02:06:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:34.742602 | orchestrator | 2026-01-01 02:06:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:34.742735 | orchestrator | 2026-01-01 02:06:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:37.782842 | orchestrator | 2026-01-01 02:06:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:37.784626 | orchestrator | 2026-01-01 02:06:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:37.784662 | orchestrator | 2026-01-01 02:06:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:40.847646 | orchestrator | 2026-01-01 02:06:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:40.848379 | orchestrator | 2026-01-01 02:06:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:40.848410 | orchestrator | 2026-01-01 02:06:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:43.902308 | orchestrator | 2026-01-01 02:06:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:43.905371 | orchestrator | 2026-01-01 02:06:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:43.905960 | orchestrator | 2026-01-01 02:06:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:46.949649 | orchestrator | 2026-01-01 02:06:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:46.952324 | orchestrator | 2026-01-01 02:06:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:46.952370 | orchestrator | 2026-01-01 02:06:46 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:06:50.015246 | orchestrator | 2026-01-01 02:06:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:50.016735 | orchestrator | 2026-01-01 02:06:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:50.016843 | orchestrator | 2026-01-01 02:06:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:53.059067 | orchestrator | 2026-01-01 02:06:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:53.063667 | orchestrator | 2026-01-01 02:06:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:53.063733 | orchestrator | 2026-01-01 02:06:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:56.107859 | orchestrator | 2026-01-01 02:06:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:56.108921 | orchestrator | 2026-01-01 02:06:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:56.108966 | orchestrator | 2026-01-01 02:06:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:06:59.170914 | orchestrator | 2026-01-01 02:06:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:06:59.172633 | orchestrator | 2026-01-01 02:06:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:06:59.172672 | orchestrator | 2026-01-01 02:06:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:02.219834 | orchestrator | 2026-01-01 02:07:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:07:02.221615 | orchestrator | 2026-01-01 02:07:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:07:02.221668 | orchestrator | 2026-01-01 02:07:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:05.276077 | orchestrator | 2026-01-01 
02:07:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:07:05.278637 | orchestrator | 2026-01-01 02:07:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:07:05.278715 | orchestrator | 2026-01-01 02:07:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:08.323970 | orchestrator | 2026-01-01 02:07:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:07:08.325711 | orchestrator | 2026-01-01 02:07:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:07:08.325867 | orchestrator | 2026-01-01 02:07:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:11.371902 | orchestrator | 2026-01-01 02:07:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:07:11.372919 | orchestrator | 2026-01-01 02:07:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:07:11.372996 | orchestrator | 2026-01-01 02:07:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:14.420465 | orchestrator | 2026-01-01 02:07:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:07:14.422655 | orchestrator | 2026-01-01 02:07:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:07:14.422707 | orchestrator | 2026-01-01 02:07:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:17.464595 | orchestrator | 2026-01-01 02:07:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:07:17.466154 | orchestrator | 2026-01-01 02:07:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:07:17.466237 | orchestrator | 2026-01-01 02:07:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:07:20.516433 | orchestrator | 2026-01-01 02:07:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:07:20.517978 | orchestrator | 2026-01-01 02:07:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 02:07:20.518113 | orchestrator | 2026-01-01 02:07:20 | INFO  | Wait 1 second(s) until the next check
[... identical status checks elided: tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 remained in state STARTED, polled roughly every 3 seconds from 02:07:20 through 02:12:37 ...]
2026-01-01 02:12:37.901801 | orchestrator | 2026-01-01 02:12:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state
STARTED 2026-01-01 02:12:37.903967 | orchestrator | 2026-01-01 02:12:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:12:37.904052 | orchestrator | 2026-01-01 02:12:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:12:40.950867 | orchestrator | 2026-01-01 02:12:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:12:40.953182 | orchestrator | 2026-01-01 02:12:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:12:40.953231 | orchestrator | 2026-01-01 02:12:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:12:44.010967 | orchestrator | 2026-01-01 02:12:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:12:44.013868 | orchestrator | 2026-01-01 02:12:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:12:44.013995 | orchestrator | 2026-01-01 02:12:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:12:47.060167 | orchestrator | 2026-01-01 02:12:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:12:47.061854 | orchestrator | 2026-01-01 02:12:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:12:47.061908 | orchestrator | 2026-01-01 02:12:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:12:50.109766 | orchestrator | 2026-01-01 02:12:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:12:50.112626 | orchestrator | 2026-01-01 02:12:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:12:50.112735 | orchestrator | 2026-01-01 02:12:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:12:53.169135 | orchestrator | 2026-01-01 02:12:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:12:53.170954 | orchestrator | 2026-01-01 02:12:53 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:12:53.171010 | orchestrator | 2026-01-01 02:12:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:12:56.220449 | orchestrator | 2026-01-01 02:12:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:12:56.221662 | orchestrator | 2026-01-01 02:12:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:12:56.221703 | orchestrator | 2026-01-01 02:12:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:12:59.264054 | orchestrator | 2026-01-01 02:12:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:12:59.265401 | orchestrator | 2026-01-01 02:12:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:12:59.265432 | orchestrator | 2026-01-01 02:12:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:02.316401 | orchestrator | 2026-01-01 02:13:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:02.319393 | orchestrator | 2026-01-01 02:13:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:02.319630 | orchestrator | 2026-01-01 02:13:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:05.374540 | orchestrator | 2026-01-01 02:13:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:05.377453 | orchestrator | 2026-01-01 02:13:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:05.377540 | orchestrator | 2026-01-01 02:13:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:08.429169 | orchestrator | 2026-01-01 02:13:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:08.430787 | orchestrator | 2026-01-01 02:13:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:13:08.430886 | orchestrator | 2026-01-01 02:13:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:11.481630 | orchestrator | 2026-01-01 02:13:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:11.482936 | orchestrator | 2026-01-01 02:13:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:11.482983 | orchestrator | 2026-01-01 02:13:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:14.534518 | orchestrator | 2026-01-01 02:13:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:14.535760 | orchestrator | 2026-01-01 02:13:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:14.535814 | orchestrator | 2026-01-01 02:13:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:17.581093 | orchestrator | 2026-01-01 02:13:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:17.582666 | orchestrator | 2026-01-01 02:13:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:17.582717 | orchestrator | 2026-01-01 02:13:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:20.632767 | orchestrator | 2026-01-01 02:13:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:20.635274 | orchestrator | 2026-01-01 02:13:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:20.635304 | orchestrator | 2026-01-01 02:13:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:23.682143 | orchestrator | 2026-01-01 02:13:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:23.683731 | orchestrator | 2026-01-01 02:13:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:23.683799 | orchestrator | 2026-01-01 02:13:23 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:13:26.730749 | orchestrator | 2026-01-01 02:13:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:26.732880 | orchestrator | 2026-01-01 02:13:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:26.732958 | orchestrator | 2026-01-01 02:13:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:29.784959 | orchestrator | 2026-01-01 02:13:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:29.786125 | orchestrator | 2026-01-01 02:13:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:29.786171 | orchestrator | 2026-01-01 02:13:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:32.830511 | orchestrator | 2026-01-01 02:13:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:32.831279 | orchestrator | 2026-01-01 02:13:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:32.831303 | orchestrator | 2026-01-01 02:13:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:35.878069 | orchestrator | 2026-01-01 02:13:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:35.880287 | orchestrator | 2026-01-01 02:13:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:35.880380 | orchestrator | 2026-01-01 02:13:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:38.945065 | orchestrator | 2026-01-01 02:13:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:38.951524 | orchestrator | 2026-01-01 02:13:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:38.951614 | orchestrator | 2026-01-01 02:13:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:41.997451 | orchestrator | 2026-01-01 
02:13:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:41.998687 | orchestrator | 2026-01-01 02:13:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:41.998753 | orchestrator | 2026-01-01 02:13:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:45.057083 | orchestrator | 2026-01-01 02:13:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:45.058227 | orchestrator | 2026-01-01 02:13:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:45.058286 | orchestrator | 2026-01-01 02:13:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:48.098905 | orchestrator | 2026-01-01 02:13:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:48.102661 | orchestrator | 2026-01-01 02:13:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:48.102760 | orchestrator | 2026-01-01 02:13:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:51.150670 | orchestrator | 2026-01-01 02:13:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:51.152629 | orchestrator | 2026-01-01 02:13:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:51.152685 | orchestrator | 2026-01-01 02:13:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:54.192491 | orchestrator | 2026-01-01 02:13:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:13:54.195628 | orchestrator | 2026-01-01 02:13:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:54.195725 | orchestrator | 2026-01-01 02:13:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:13:57.245988 | orchestrator | 2026-01-01 02:13:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:13:57.248324 | orchestrator | 2026-01-01 02:13:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:13:57.248410 | orchestrator | 2026-01-01 02:13:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:00.295086 | orchestrator | 2026-01-01 02:14:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:00.297220 | orchestrator | 2026-01-01 02:14:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:00.297265 | orchestrator | 2026-01-01 02:14:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:03.343430 | orchestrator | 2026-01-01 02:14:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:03.344169 | orchestrator | 2026-01-01 02:14:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:03.344197 | orchestrator | 2026-01-01 02:14:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:06.387219 | orchestrator | 2026-01-01 02:14:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:06.388684 | orchestrator | 2026-01-01 02:14:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:06.388799 | orchestrator | 2026-01-01 02:14:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:09.439434 | orchestrator | 2026-01-01 02:14:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:09.440678 | orchestrator | 2026-01-01 02:14:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:09.440739 | orchestrator | 2026-01-01 02:14:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:12.482904 | orchestrator | 2026-01-01 02:14:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:12.484699 | orchestrator | 2026-01-01 02:14:12 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:12.484842 | orchestrator | 2026-01-01 02:14:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:15.527570 | orchestrator | 2026-01-01 02:14:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:15.529399 | orchestrator | 2026-01-01 02:14:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:15.529479 | orchestrator | 2026-01-01 02:14:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:18.578522 | orchestrator | 2026-01-01 02:14:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:18.581446 | orchestrator | 2026-01-01 02:14:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:18.581479 | orchestrator | 2026-01-01 02:14:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:21.633313 | orchestrator | 2026-01-01 02:14:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:21.635361 | orchestrator | 2026-01-01 02:14:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:21.635398 | orchestrator | 2026-01-01 02:14:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:24.692549 | orchestrator | 2026-01-01 02:14:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:24.694716 | orchestrator | 2026-01-01 02:14:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:24.694735 | orchestrator | 2026-01-01 02:14:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:27.741828 | orchestrator | 2026-01-01 02:14:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:27.743783 | orchestrator | 2026-01-01 02:14:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:14:27.743897 | orchestrator | 2026-01-01 02:14:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:30.794892 | orchestrator | 2026-01-01 02:14:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:30.796885 | orchestrator | 2026-01-01 02:14:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:30.797010 | orchestrator | 2026-01-01 02:14:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:33.851471 | orchestrator | 2026-01-01 02:14:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:33.853875 | orchestrator | 2026-01-01 02:14:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:33.854143 | orchestrator | 2026-01-01 02:14:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:36.898843 | orchestrator | 2026-01-01 02:14:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:36.900923 | orchestrator | 2026-01-01 02:14:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:36.900954 | orchestrator | 2026-01-01 02:14:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:39.949805 | orchestrator | 2026-01-01 02:14:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:39.951789 | orchestrator | 2026-01-01 02:14:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:39.951840 | orchestrator | 2026-01-01 02:14:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:43.004407 | orchestrator | 2026-01-01 02:14:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:43.006093 | orchestrator | 2026-01-01 02:14:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:43.006486 | orchestrator | 2026-01-01 02:14:43 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:14:46.049953 | orchestrator | 2026-01-01 02:14:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:46.050437 | orchestrator | 2026-01-01 02:14:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:46.050481 | orchestrator | 2026-01-01 02:14:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:49.094570 | orchestrator | 2026-01-01 02:14:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:49.095302 | orchestrator | 2026-01-01 02:14:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:49.095573 | orchestrator | 2026-01-01 02:14:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:52.140693 | orchestrator | 2026-01-01 02:14:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:52.144464 | orchestrator | 2026-01-01 02:14:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:52.144584 | orchestrator | 2026-01-01 02:14:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:55.193742 | orchestrator | 2026-01-01 02:14:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:55.195004 | orchestrator | 2026-01-01 02:14:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:55.195048 | orchestrator | 2026-01-01 02:14:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:14:58.245597 | orchestrator | 2026-01-01 02:14:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:14:58.246795 | orchestrator | 2026-01-01 02:14:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:14:58.246895 | orchestrator | 2026-01-01 02:14:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:01.290205 | orchestrator | 2026-01-01 
02:15:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:01.293141 | orchestrator | 2026-01-01 02:15:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:01.293545 | orchestrator | 2026-01-01 02:15:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:04.342999 | orchestrator | 2026-01-01 02:15:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:04.344567 | orchestrator | 2026-01-01 02:15:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:04.344889 | orchestrator | 2026-01-01 02:15:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:07.392512 | orchestrator | 2026-01-01 02:15:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:07.393864 | orchestrator | 2026-01-01 02:15:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:07.393965 | orchestrator | 2026-01-01 02:15:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:10.446623 | orchestrator | 2026-01-01 02:15:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:10.448235 | orchestrator | 2026-01-01 02:15:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:10.448304 | orchestrator | 2026-01-01 02:15:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:13.490578 | orchestrator | 2026-01-01 02:15:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:13.492810 | orchestrator | 2026-01-01 02:15:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:13.494094 | orchestrator | 2026-01-01 02:15:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:16.536682 | orchestrator | 2026-01-01 02:15:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:15:16.538625 | orchestrator | 2026-01-01 02:15:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:16.538693 | orchestrator | 2026-01-01 02:15:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:19.589658 | orchestrator | 2026-01-01 02:15:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:19.591039 | orchestrator | 2026-01-01 02:15:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:19.591147 | orchestrator | 2026-01-01 02:15:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:22.637530 | orchestrator | 2026-01-01 02:15:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:22.639172 | orchestrator | 2026-01-01 02:15:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:22.639254 | orchestrator | 2026-01-01 02:15:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:25.694903 | orchestrator | 2026-01-01 02:15:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:25.697114 | orchestrator | 2026-01-01 02:15:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:25.697238 | orchestrator | 2026-01-01 02:15:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:28.749584 | orchestrator | 2026-01-01 02:15:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:28.751300 | orchestrator | 2026-01-01 02:15:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:28.751467 | orchestrator | 2026-01-01 02:15:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:31.796949 | orchestrator | 2026-01-01 02:15:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:31.799103 | orchestrator | 2026-01-01 02:15:31 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:31.799175 | orchestrator | 2026-01-01 02:15:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:34.842542 | orchestrator | 2026-01-01 02:15:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:34.844749 | orchestrator | 2026-01-01 02:15:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:34.844873 | orchestrator | 2026-01-01 02:15:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:37.888922 | orchestrator | 2026-01-01 02:15:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:37.891296 | orchestrator | 2026-01-01 02:15:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:37.891458 | orchestrator | 2026-01-01 02:15:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:40.947639 | orchestrator | 2026-01-01 02:15:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:40.949094 | orchestrator | 2026-01-01 02:15:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:40.949117 | orchestrator | 2026-01-01 02:15:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:43.993068 | orchestrator | 2026-01-01 02:15:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:43.997189 | orchestrator | 2026-01-01 02:15:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:43.997388 | orchestrator | 2026-01-01 02:15:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:47.045511 | orchestrator | 2026-01-01 02:15:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:47.047175 | orchestrator | 2026-01-01 02:15:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:15:47.047226 | orchestrator | 2026-01-01 02:15:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:50.098117 | orchestrator | 2026-01-01 02:15:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:50.099366 | orchestrator | 2026-01-01 02:15:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:50.099502 | orchestrator | 2026-01-01 02:15:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:53.144556 | orchestrator | 2026-01-01 02:15:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:53.144738 | orchestrator | 2026-01-01 02:15:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:53.144761 | orchestrator | 2026-01-01 02:15:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:56.200287 | orchestrator | 2026-01-01 02:15:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:56.202078 | orchestrator | 2026-01-01 02:15:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:56.202129 | orchestrator | 2026-01-01 02:15:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:15:59.242290 | orchestrator | 2026-01-01 02:15:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:15:59.242823 | orchestrator | 2026-01-01 02:15:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:15:59.242851 | orchestrator | 2026-01-01 02:15:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:16:02.284083 | orchestrator | 2026-01-01 02:16:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:16:02.286348 | orchestrator | 2026-01-01 02:16:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:16:02.286404 | orchestrator | 2026-01-01 02:16:02 | INFO  | Wait 1 second(s) 
until the next check
2026-01-01 02:16:05.332683 | orchestrator | 2026-01-01 02:16:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 02:16:05.333573 | orchestrator | 2026-01-01 02:16:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 02:16:05.333623 | orchestrator | 2026-01-01 02:16:05 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds; both tasks remained in state STARTED from 02:16:05 through 02:21:19 ...]
2026-01-01 02:21:19.592137 | orchestrator | 2026-01-01 02:21:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 02:21:19.594687 | orchestrator | 2026-01-01 02:21:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 02:21:19.594744 | orchestrator | 2026-01-01 02:21:19 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 02:21:22.647563 | orchestrator | 2026-01-01 02:21:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:22.650598 | orchestrator | 2026-01-01 02:21:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:22.650678 | orchestrator | 2026-01-01 02:21:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:25.711019 | orchestrator | 2026-01-01 02:21:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:25.714267 | orchestrator | 2026-01-01 02:21:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:25.714320 | orchestrator | 2026-01-01 02:21:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:28.764954 | orchestrator | 2026-01-01 02:21:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:28.767748 | orchestrator | 2026-01-01 02:21:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:28.767827 | orchestrator | 2026-01-01 02:21:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:31.819657 | orchestrator | 2026-01-01 02:21:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:31.822177 | orchestrator | 2026-01-01 02:21:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:31.822231 | orchestrator | 2026-01-01 02:21:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:34.869023 | orchestrator | 2026-01-01 02:21:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:34.870353 | orchestrator | 2026-01-01 02:21:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:34.870394 | orchestrator | 2026-01-01 02:21:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:37.925582 | orchestrator | 2026-01-01 
02:21:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:37.927481 | orchestrator | 2026-01-01 02:21:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:37.927516 | orchestrator | 2026-01-01 02:21:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:40.994878 | orchestrator | 2026-01-01 02:21:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:40.997983 | orchestrator | 2026-01-01 02:21:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:40.998101 | orchestrator | 2026-01-01 02:21:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:44.036270 | orchestrator | 2026-01-01 02:21:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:44.037312 | orchestrator | 2026-01-01 02:21:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:44.037398 | orchestrator | 2026-01-01 02:21:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:47.105712 | orchestrator | 2026-01-01 02:21:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:47.108103 | orchestrator | 2026-01-01 02:21:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:47.108146 | orchestrator | 2026-01-01 02:21:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:50.146872 | orchestrator | 2026-01-01 02:21:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:50.147578 | orchestrator | 2026-01-01 02:21:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:50.147710 | orchestrator | 2026-01-01 02:21:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:53.185791 | orchestrator | 2026-01-01 02:21:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:21:53.186096 | orchestrator | 2026-01-01 02:21:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:53.186122 | orchestrator | 2026-01-01 02:21:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:56.227720 | orchestrator | 2026-01-01 02:21:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:56.230751 | orchestrator | 2026-01-01 02:21:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:56.230803 | orchestrator | 2026-01-01 02:21:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:21:59.280799 | orchestrator | 2026-01-01 02:21:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:21:59.282900 | orchestrator | 2026-01-01 02:21:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:21:59.282967 | orchestrator | 2026-01-01 02:21:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:02.334351 | orchestrator | 2026-01-01 02:22:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:02.336590 | orchestrator | 2026-01-01 02:22:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:02.336640 | orchestrator | 2026-01-01 02:22:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:05.378528 | orchestrator | 2026-01-01 02:22:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:05.379453 | orchestrator | 2026-01-01 02:22:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:05.379482 | orchestrator | 2026-01-01 02:22:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:08.422194 | orchestrator | 2026-01-01 02:22:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:08.422936 | orchestrator | 2026-01-01 02:22:08 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:08.423036 | orchestrator | 2026-01-01 02:22:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:11.469659 | orchestrator | 2026-01-01 02:22:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:11.470551 | orchestrator | 2026-01-01 02:22:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:11.470589 | orchestrator | 2026-01-01 02:22:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:14.517006 | orchestrator | 2026-01-01 02:22:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:14.518298 | orchestrator | 2026-01-01 02:22:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:14.518324 | orchestrator | 2026-01-01 02:22:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:17.562849 | orchestrator | 2026-01-01 02:22:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:17.565288 | orchestrator | 2026-01-01 02:22:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:17.565330 | orchestrator | 2026-01-01 02:22:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:20.621852 | orchestrator | 2026-01-01 02:22:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:20.623504 | orchestrator | 2026-01-01 02:22:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:20.623577 | orchestrator | 2026-01-01 02:22:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:23.677493 | orchestrator | 2026-01-01 02:22:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:23.678151 | orchestrator | 2026-01-01 02:22:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:22:23.678233 | orchestrator | 2026-01-01 02:22:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:26.728969 | orchestrator | 2026-01-01 02:22:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:26.731902 | orchestrator | 2026-01-01 02:22:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:26.731982 | orchestrator | 2026-01-01 02:22:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:29.784918 | orchestrator | 2026-01-01 02:22:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:29.787836 | orchestrator | 2026-01-01 02:22:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:29.787912 | orchestrator | 2026-01-01 02:22:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:32.841859 | orchestrator | 2026-01-01 02:22:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:32.846975 | orchestrator | 2026-01-01 02:22:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:32.847052 | orchestrator | 2026-01-01 02:22:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:35.894537 | orchestrator | 2026-01-01 02:22:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:35.896847 | orchestrator | 2026-01-01 02:22:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:35.896900 | orchestrator | 2026-01-01 02:22:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:38.942703 | orchestrator | 2026-01-01 02:22:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:38.944594 | orchestrator | 2026-01-01 02:22:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:38.944647 | orchestrator | 2026-01-01 02:22:38 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:22:41.992610 | orchestrator | 2026-01-01 02:22:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:41.994098 | orchestrator | 2026-01-01 02:22:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:41.994186 | orchestrator | 2026-01-01 02:22:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:45.042883 | orchestrator | 2026-01-01 02:22:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:45.044540 | orchestrator | 2026-01-01 02:22:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:45.044577 | orchestrator | 2026-01-01 02:22:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:48.094614 | orchestrator | 2026-01-01 02:22:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:48.096791 | orchestrator | 2026-01-01 02:22:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:48.096861 | orchestrator | 2026-01-01 02:22:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:51.143754 | orchestrator | 2026-01-01 02:22:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:51.145728 | orchestrator | 2026-01-01 02:22:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:51.145758 | orchestrator | 2026-01-01 02:22:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:54.190590 | orchestrator | 2026-01-01 02:22:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:54.192004 | orchestrator | 2026-01-01 02:22:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:54.192174 | orchestrator | 2026-01-01 02:22:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:22:57.246320 | orchestrator | 2026-01-01 
02:22:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:22:57.248226 | orchestrator | 2026-01-01 02:22:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:22:57.248288 | orchestrator | 2026-01-01 02:22:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:00.295061 | orchestrator | 2026-01-01 02:23:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:00.296633 | orchestrator | 2026-01-01 02:23:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:00.296661 | orchestrator | 2026-01-01 02:23:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:03.348907 | orchestrator | 2026-01-01 02:23:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:03.350221 | orchestrator | 2026-01-01 02:23:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:03.350309 | orchestrator | 2026-01-01 02:23:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:06.406253 | orchestrator | 2026-01-01 02:23:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:06.408600 | orchestrator | 2026-01-01 02:23:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:06.408631 | orchestrator | 2026-01-01 02:23:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:09.458983 | orchestrator | 2026-01-01 02:23:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:09.462358 | orchestrator | 2026-01-01 02:23:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:09.462563 | orchestrator | 2026-01-01 02:23:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:12.508901 | orchestrator | 2026-01-01 02:23:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:23:12.510273 | orchestrator | 2026-01-01 02:23:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:12.510360 | orchestrator | 2026-01-01 02:23:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:15.568866 | orchestrator | 2026-01-01 02:23:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:15.570949 | orchestrator | 2026-01-01 02:23:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:15.570996 | orchestrator | 2026-01-01 02:23:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:18.622289 | orchestrator | 2026-01-01 02:23:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:18.624830 | orchestrator | 2026-01-01 02:23:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:18.625072 | orchestrator | 2026-01-01 02:23:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:21.665878 | orchestrator | 2026-01-01 02:23:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:21.666697 | orchestrator | 2026-01-01 02:23:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:21.666722 | orchestrator | 2026-01-01 02:23:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:24.714243 | orchestrator | 2026-01-01 02:23:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:24.715334 | orchestrator | 2026-01-01 02:23:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:24.715418 | orchestrator | 2026-01-01 02:23:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:27.768538 | orchestrator | 2026-01-01 02:23:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:27.772342 | orchestrator | 2026-01-01 02:23:27 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:27.772483 | orchestrator | 2026-01-01 02:23:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:30.828160 | orchestrator | 2026-01-01 02:23:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:30.829917 | orchestrator | 2026-01-01 02:23:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:30.829995 | orchestrator | 2026-01-01 02:23:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:33.884885 | orchestrator | 2026-01-01 02:23:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:33.887151 | orchestrator | 2026-01-01 02:23:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:33.887216 | orchestrator | 2026-01-01 02:23:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:36.941095 | orchestrator | 2026-01-01 02:23:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:36.943190 | orchestrator | 2026-01-01 02:23:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:36.943221 | orchestrator | 2026-01-01 02:23:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:39.997902 | orchestrator | 2026-01-01 02:23:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:40.000636 | orchestrator | 2026-01-01 02:23:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:40.000703 | orchestrator | 2026-01-01 02:23:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:43.050125 | orchestrator | 2026-01-01 02:23:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:43.050583 | orchestrator | 2026-01-01 02:23:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:23:43.050620 | orchestrator | 2026-01-01 02:23:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:46.091800 | orchestrator | 2026-01-01 02:23:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:46.092916 | orchestrator | 2026-01-01 02:23:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:46.092956 | orchestrator | 2026-01-01 02:23:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:49.143002 | orchestrator | 2026-01-01 02:23:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:49.145664 | orchestrator | 2026-01-01 02:23:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:49.145716 | orchestrator | 2026-01-01 02:23:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:52.199591 | orchestrator | 2026-01-01 02:23:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:52.203037 | orchestrator | 2026-01-01 02:23:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:52.203728 | orchestrator | 2026-01-01 02:23:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:55.256721 | orchestrator | 2026-01-01 02:23:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:55.260361 | orchestrator | 2026-01-01 02:23:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:55.260395 | orchestrator | 2026-01-01 02:23:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:23:58.312018 | orchestrator | 2026-01-01 02:23:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:23:58.314556 | orchestrator | 2026-01-01 02:23:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:23:58.314613 | orchestrator | 2026-01-01 02:23:58 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:24:01.360789 | orchestrator | 2026-01-01 02:24:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:01.363872 | orchestrator | 2026-01-01 02:24:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:01.363913 | orchestrator | 2026-01-01 02:24:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:04.412535 | orchestrator | 2026-01-01 02:24:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:04.415283 | orchestrator | 2026-01-01 02:24:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:04.415333 | orchestrator | 2026-01-01 02:24:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:07.468102 | orchestrator | 2026-01-01 02:24:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:07.472017 | orchestrator | 2026-01-01 02:24:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:07.472097 | orchestrator | 2026-01-01 02:24:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:10.524232 | orchestrator | 2026-01-01 02:24:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:10.526061 | orchestrator | 2026-01-01 02:24:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:10.526111 | orchestrator | 2026-01-01 02:24:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:13.582546 | orchestrator | 2026-01-01 02:24:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:13.586240 | orchestrator | 2026-01-01 02:24:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:13.586300 | orchestrator | 2026-01-01 02:24:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:16.628715 | orchestrator | 2026-01-01 
02:24:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:16.630771 | orchestrator | 2026-01-01 02:24:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:16.630863 | orchestrator | 2026-01-01 02:24:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:19.677504 | orchestrator | 2026-01-01 02:24:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:19.678508 | orchestrator | 2026-01-01 02:24:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:19.678547 | orchestrator | 2026-01-01 02:24:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:22.723977 | orchestrator | 2026-01-01 02:24:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:22.725522 | orchestrator | 2026-01-01 02:24:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:22.725646 | orchestrator | 2026-01-01 02:24:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:25.774930 | orchestrator | 2026-01-01 02:24:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:25.777202 | orchestrator | 2026-01-01 02:24:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:25.777270 | orchestrator | 2026-01-01 02:24:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:28.825407 | orchestrator | 2026-01-01 02:24:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:28.827788 | orchestrator | 2026-01-01 02:24:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:28.827838 | orchestrator | 2026-01-01 02:24:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:31.878807 | orchestrator | 2026-01-01 02:24:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:24:31.881262 | orchestrator | 2026-01-01 02:24:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:31.881326 | orchestrator | 2026-01-01 02:24:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:34.935280 | orchestrator | 2026-01-01 02:24:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:34.937194 | orchestrator | 2026-01-01 02:24:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:34.937344 | orchestrator | 2026-01-01 02:24:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:37.987298 | orchestrator | 2026-01-01 02:24:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:37.990165 | orchestrator | 2026-01-01 02:24:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:37.990243 | orchestrator | 2026-01-01 02:24:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:41.049342 | orchestrator | 2026-01-01 02:24:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:41.049477 | orchestrator | 2026-01-01 02:24:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:41.049539 | orchestrator | 2026-01-01 02:24:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:44.100538 | orchestrator | 2026-01-01 02:24:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:44.102665 | orchestrator | 2026-01-01 02:24:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:44.102715 | orchestrator | 2026-01-01 02:24:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:47.150979 | orchestrator | 2026-01-01 02:24:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:47.152766 | orchestrator | 2026-01-01 02:24:47 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:47.152825 | orchestrator | 2026-01-01 02:24:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:50.201657 | orchestrator | 2026-01-01 02:24:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:50.203770 | orchestrator | 2026-01-01 02:24:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:50.203820 | orchestrator | 2026-01-01 02:24:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:53.256574 | orchestrator | 2026-01-01 02:24:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:53.257573 | orchestrator | 2026-01-01 02:24:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:53.257892 | orchestrator | 2026-01-01 02:24:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:56.306745 | orchestrator | 2026-01-01 02:24:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:56.309035 | orchestrator | 2026-01-01 02:24:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:56.309183 | orchestrator | 2026-01-01 02:24:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:24:59.358117 | orchestrator | 2026-01-01 02:24:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:24:59.358914 | orchestrator | 2026-01-01 02:24:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:24:59.359004 | orchestrator | 2026-01-01 02:24:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:25:02.415027 | orchestrator | 2026-01-01 02:25:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:25:02.416676 | orchestrator | 2026-01-01 02:25:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:25:02.416735 | orchestrator | 2026-01-01 02:25:02 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:25:05.464541 | orchestrator | 2026-01-01 02:25:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 02:25:05.465948 | orchestrator | 2026-01-01 02:25:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 02:25:05.466127 | orchestrator | 2026-01-01 02:25:05 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 02:25:08 through 02:30:35; both tasks remained in state STARTED throughout ...]
2026-01-01 02:30:35.146213 | orchestrator | 2026-01-01 02:30:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 02:30:35.149786 | orchestrator | 2026-01-01 02:30:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 02:30:35.150093 | orchestrator | 2026-01-01 02:30:35 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 02:30:38.201695 | orchestrator | 2026-01-01 02:30:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:30:38.202604 | orchestrator | 2026-01-01 02:30:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:30:38.202845 | orchestrator | 2026-01-01 02:30:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:30:41.253068 | orchestrator | 2026-01-01 02:30:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:30:41.255518 | orchestrator | 2026-01-01 02:30:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:30:41.255548 | orchestrator | 2026-01-01 02:30:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:30:44.304314 | orchestrator | 2026-01-01 02:30:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:30:44.307923 | orchestrator | 2026-01-01 02:30:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:30:44.308096 | orchestrator | 2026-01-01 02:30:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:30:47.352132 | orchestrator | 2026-01-01 02:30:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:30:47.353113 | orchestrator | 2026-01-01 02:30:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:30:47.353455 | orchestrator | 2026-01-01 02:30:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:30:50.407967 | orchestrator | 2026-01-01 02:30:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:30:50.410556 | orchestrator | 2026-01-01 02:30:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:30:50.410603 | orchestrator | 2026-01-01 02:30:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:30:53.468545 | orchestrator | 2026-01-01 
02:30:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:30:53.470463 | orchestrator | 2026-01-01 02:30:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:30:53.470887 | orchestrator | 2026-01-01 02:30:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:30:56.520800 | orchestrator | 2026-01-01 02:30:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:30:56.521687 | orchestrator | 2026-01-01 02:30:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:30:56.521734 | orchestrator | 2026-01-01 02:30:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:30:59.573825 | orchestrator | 2026-01-01 02:30:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:30:59.575183 | orchestrator | 2026-01-01 02:30:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:30:59.575427 | orchestrator | 2026-01-01 02:30:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:02.629412 | orchestrator | 2026-01-01 02:31:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:02.631661 | orchestrator | 2026-01-01 02:31:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:02.631857 | orchestrator | 2026-01-01 02:31:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:05.684772 | orchestrator | 2026-01-01 02:31:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:05.687318 | orchestrator | 2026-01-01 02:31:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:05.687447 | orchestrator | 2026-01-01 02:31:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:08.735393 | orchestrator | 2026-01-01 02:31:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:31:08.737117 | orchestrator | 2026-01-01 02:31:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:08.737256 | orchestrator | 2026-01-01 02:31:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:11.785751 | orchestrator | 2026-01-01 02:31:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:11.787800 | orchestrator | 2026-01-01 02:31:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:11.787954 | orchestrator | 2026-01-01 02:31:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:14.841749 | orchestrator | 2026-01-01 02:31:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:14.844836 | orchestrator | 2026-01-01 02:31:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:14.844925 | orchestrator | 2026-01-01 02:31:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:17.891898 | orchestrator | 2026-01-01 02:31:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:17.893595 | orchestrator | 2026-01-01 02:31:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:17.893648 | orchestrator | 2026-01-01 02:31:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:20.940646 | orchestrator | 2026-01-01 02:31:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:20.941589 | orchestrator | 2026-01-01 02:31:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:20.941968 | orchestrator | 2026-01-01 02:31:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:23.999689 | orchestrator | 2026-01-01 02:31:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:24.002409 | orchestrator | 2026-01-01 02:31:24 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:24.002455 | orchestrator | 2026-01-01 02:31:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:27.055150 | orchestrator | 2026-01-01 02:31:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:27.058276 | orchestrator | 2026-01-01 02:31:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:27.058388 | orchestrator | 2026-01-01 02:31:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:30.107769 | orchestrator | 2026-01-01 02:31:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:30.109557 | orchestrator | 2026-01-01 02:31:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:30.109611 | orchestrator | 2026-01-01 02:31:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:33.156277 | orchestrator | 2026-01-01 02:31:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:33.157002 | orchestrator | 2026-01-01 02:31:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:33.157036 | orchestrator | 2026-01-01 02:31:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:36.207302 | orchestrator | 2026-01-01 02:31:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:36.209029 | orchestrator | 2026-01-01 02:31:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:36.209105 | orchestrator | 2026-01-01 02:31:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:39.261210 | orchestrator | 2026-01-01 02:31:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:39.262957 | orchestrator | 2026-01-01 02:31:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:31:39.263003 | orchestrator | 2026-01-01 02:31:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:42.319903 | orchestrator | 2026-01-01 02:31:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:42.323279 | orchestrator | 2026-01-01 02:31:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:42.323385 | orchestrator | 2026-01-01 02:31:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:45.378394 | orchestrator | 2026-01-01 02:31:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:45.380474 | orchestrator | 2026-01-01 02:31:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:45.380794 | orchestrator | 2026-01-01 02:31:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:48.428908 | orchestrator | 2026-01-01 02:31:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:48.431540 | orchestrator | 2026-01-01 02:31:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:48.431673 | orchestrator | 2026-01-01 02:31:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:51.489437 | orchestrator | 2026-01-01 02:31:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:51.492469 | orchestrator | 2026-01-01 02:31:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:51.492542 | orchestrator | 2026-01-01 02:31:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:31:54.539269 | orchestrator | 2026-01-01 02:31:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:54.541798 | orchestrator | 2026-01-01 02:31:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:54.541925 | orchestrator | 2026-01-01 02:31:54 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:31:57.589830 | orchestrator | 2026-01-01 02:31:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:31:57.591914 | orchestrator | 2026-01-01 02:31:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:31:57.591983 | orchestrator | 2026-01-01 02:31:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:00.639953 | orchestrator | 2026-01-01 02:32:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:00.640476 | orchestrator | 2026-01-01 02:32:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:00.640503 | orchestrator | 2026-01-01 02:32:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:03.688941 | orchestrator | 2026-01-01 02:32:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:03.690199 | orchestrator | 2026-01-01 02:32:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:03.690227 | orchestrator | 2026-01-01 02:32:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:06.741791 | orchestrator | 2026-01-01 02:32:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:06.745099 | orchestrator | 2026-01-01 02:32:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:06.745173 | orchestrator | 2026-01-01 02:32:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:09.788597 | orchestrator | 2026-01-01 02:32:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:09.790592 | orchestrator | 2026-01-01 02:32:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:09.790636 | orchestrator | 2026-01-01 02:32:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:12.844099 | orchestrator | 2026-01-01 
02:32:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:12.845543 | orchestrator | 2026-01-01 02:32:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:12.845590 | orchestrator | 2026-01-01 02:32:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:15.899823 | orchestrator | 2026-01-01 02:32:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:15.902554 | orchestrator | 2026-01-01 02:32:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:15.902666 | orchestrator | 2026-01-01 02:32:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:18.958189 | orchestrator | 2026-01-01 02:32:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:18.960569 | orchestrator | 2026-01-01 02:32:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:18.960633 | orchestrator | 2026-01-01 02:32:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:22.015464 | orchestrator | 2026-01-01 02:32:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:22.017359 | orchestrator | 2026-01-01 02:32:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:22.017388 | orchestrator | 2026-01-01 02:32:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:25.071763 | orchestrator | 2026-01-01 02:32:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:25.075363 | orchestrator | 2026-01-01 02:32:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:25.075456 | orchestrator | 2026-01-01 02:32:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:28.130696 | orchestrator | 2026-01-01 02:32:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:32:28.134718 | orchestrator | 2026-01-01 02:32:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:28.134793 | orchestrator | 2026-01-01 02:32:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:31.184455 | orchestrator | 2026-01-01 02:32:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:31.185692 | orchestrator | 2026-01-01 02:32:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:31.185777 | orchestrator | 2026-01-01 02:32:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:34.232537 | orchestrator | 2026-01-01 02:32:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:34.235179 | orchestrator | 2026-01-01 02:32:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:34.235234 | orchestrator | 2026-01-01 02:32:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:37.283034 | orchestrator | 2026-01-01 02:32:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:37.284974 | orchestrator | 2026-01-01 02:32:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:37.285011 | orchestrator | 2026-01-01 02:32:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:40.326608 | orchestrator | 2026-01-01 02:32:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:40.328268 | orchestrator | 2026-01-01 02:32:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:40.328453 | orchestrator | 2026-01-01 02:32:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:43.379698 | orchestrator | 2026-01-01 02:32:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:43.382375 | orchestrator | 2026-01-01 02:32:43 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:43.382803 | orchestrator | 2026-01-01 02:32:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:46.441063 | orchestrator | 2026-01-01 02:32:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:46.443666 | orchestrator | 2026-01-01 02:32:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:46.443743 | orchestrator | 2026-01-01 02:32:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:49.494054 | orchestrator | 2026-01-01 02:32:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:49.496325 | orchestrator | 2026-01-01 02:32:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:49.496397 | orchestrator | 2026-01-01 02:32:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:52.549250 | orchestrator | 2026-01-01 02:32:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:52.552223 | orchestrator | 2026-01-01 02:32:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:52.552279 | orchestrator | 2026-01-01 02:32:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:55.597795 | orchestrator | 2026-01-01 02:32:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:55.599020 | orchestrator | 2026-01-01 02:32:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:32:55.599068 | orchestrator | 2026-01-01 02:32:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:32:58.644140 | orchestrator | 2026-01-01 02:32:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:32:58.644556 | orchestrator | 2026-01-01 02:32:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:32:58.644597 | orchestrator | 2026-01-01 02:32:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:01.690097 | orchestrator | 2026-01-01 02:33:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:01.691754 | orchestrator | 2026-01-01 02:33:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:01.691795 | orchestrator | 2026-01-01 02:33:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:04.742245 | orchestrator | 2026-01-01 02:33:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:04.744590 | orchestrator | 2026-01-01 02:33:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:04.744669 | orchestrator | 2026-01-01 02:33:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:07.799396 | orchestrator | 2026-01-01 02:33:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:07.801527 | orchestrator | 2026-01-01 02:33:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:07.801562 | orchestrator | 2026-01-01 02:33:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:10.857653 | orchestrator | 2026-01-01 02:33:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:10.859530 | orchestrator | 2026-01-01 02:33:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:10.859578 | orchestrator | 2026-01-01 02:33:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:13.901819 | orchestrator | 2026-01-01 02:33:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:13.903748 | orchestrator | 2026-01-01 02:33:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:13.903827 | orchestrator | 2026-01-01 02:33:13 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:33:16.950131 | orchestrator | 2026-01-01 02:33:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:16.950798 | orchestrator | 2026-01-01 02:33:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:16.950932 | orchestrator | 2026-01-01 02:33:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:20.005684 | orchestrator | 2026-01-01 02:33:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:20.007521 | orchestrator | 2026-01-01 02:33:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:20.007576 | orchestrator | 2026-01-01 02:33:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:23.053837 | orchestrator | 2026-01-01 02:33:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:23.058424 | orchestrator | 2026-01-01 02:33:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:23.058479 | orchestrator | 2026-01-01 02:33:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:26.101374 | orchestrator | 2026-01-01 02:33:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:26.102610 | orchestrator | 2026-01-01 02:33:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:26.102645 | orchestrator | 2026-01-01 02:33:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:29.147752 | orchestrator | 2026-01-01 02:33:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:29.151113 | orchestrator | 2026-01-01 02:33:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:29.151171 | orchestrator | 2026-01-01 02:33:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:32.206068 | orchestrator | 2026-01-01 
02:33:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:32.207689 | orchestrator | 2026-01-01 02:33:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:32.207734 | orchestrator | 2026-01-01 02:33:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:35.261500 | orchestrator | 2026-01-01 02:33:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:35.262043 | orchestrator | 2026-01-01 02:33:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:35.262371 | orchestrator | 2026-01-01 02:33:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:38.307781 | orchestrator | 2026-01-01 02:33:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:38.310770 | orchestrator | 2026-01-01 02:33:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:38.310854 | orchestrator | 2026-01-01 02:33:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:41.362531 | orchestrator | 2026-01-01 02:33:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:41.365933 | orchestrator | 2026-01-01 02:33:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:41.366004 | orchestrator | 2026-01-01 02:33:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:44.412472 | orchestrator | 2026-01-01 02:33:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:44.413778 | orchestrator | 2026-01-01 02:33:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:44.413823 | orchestrator | 2026-01-01 02:33:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:47.459203 | orchestrator | 2026-01-01 02:33:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:33:47.460789 | orchestrator | 2026-01-01 02:33:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:47.460834 | orchestrator | 2026-01-01 02:33:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:50.503644 | orchestrator | 2026-01-01 02:33:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:50.506100 | orchestrator | 2026-01-01 02:33:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:50.506158 | orchestrator | 2026-01-01 02:33:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:53.563463 | orchestrator | 2026-01-01 02:33:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:53.565742 | orchestrator | 2026-01-01 02:33:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:53.565788 | orchestrator | 2026-01-01 02:33:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:56.620821 | orchestrator | 2026-01-01 02:33:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:56.622815 | orchestrator | 2026-01-01 02:33:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:56.622925 | orchestrator | 2026-01-01 02:33:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:33:59.672404 | orchestrator | 2026-01-01 02:33:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:33:59.674470 | orchestrator | 2026-01-01 02:33:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:33:59.674523 | orchestrator | 2026-01-01 02:33:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:02.722732 | orchestrator | 2026-01-01 02:34:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:34:02.724935 | orchestrator | 2026-01-01 02:34:02 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:34:02.724966 | orchestrator | 2026-01-01 02:34:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:05.769836 | orchestrator | 2026-01-01 02:34:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:34:05.771077 | orchestrator | 2026-01-01 02:34:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:34:05.771108 | orchestrator | 2026-01-01 02:34:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:08.828006 | orchestrator | 2026-01-01 02:34:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:34:08.829358 | orchestrator | 2026-01-01 02:34:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:34:08.829440 | orchestrator | 2026-01-01 02:34:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:11.876463 | orchestrator | 2026-01-01 02:34:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:34:11.878980 | orchestrator | 2026-01-01 02:34:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:34:11.879063 | orchestrator | 2026-01-01 02:34:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:14.924048 | orchestrator | 2026-01-01 02:34:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:34:14.925588 | orchestrator | 2026-01-01 02:34:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:34:14.925675 | orchestrator | 2026-01-01 02:34:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:34:17.982688 | orchestrator | 2026-01-01 02:34:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:34:17.984566 | orchestrator | 2026-01-01 02:34:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:34:17.984619 | orchestrator | 2026-01-01 02:34:17 | INFO  | Wait 1 second(s) until the next check
2026-01-01 02:34:21.033202 | orchestrator | 2026-01-01 02:34:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 02:34:21.034380 | orchestrator | 2026-01-01 02:34:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 02:34:21.034453 | orchestrator | 2026-01-01 02:34:21 | INFO  | Wait 1 second(s) until the next check
(previous three-line poll cycle repeated every ~3 seconds from 02:34:24 through 02:39:17; tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 remained in state STARTED throughout)
2026-01-01 02:39:20.234804 | orchestrator | 2026-01-01 02:39:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 02:39:20.236616 | orchestrator | 2026-01-01 02:39:20 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:20.236698 | orchestrator | 2026-01-01 02:39:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:23.282842 | orchestrator | 2026-01-01 02:39:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:23.285378 | orchestrator | 2026-01-01 02:39:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:23.285403 | orchestrator | 2026-01-01 02:39:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:26.327770 | orchestrator | 2026-01-01 02:39:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:26.328372 | orchestrator | 2026-01-01 02:39:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:26.328420 | orchestrator | 2026-01-01 02:39:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:29.371663 | orchestrator | 2026-01-01 02:39:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:29.373041 | orchestrator | 2026-01-01 02:39:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:29.373074 | orchestrator | 2026-01-01 02:39:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:32.426111 | orchestrator | 2026-01-01 02:39:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:32.427822 | orchestrator | 2026-01-01 02:39:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:32.427962 | orchestrator | 2026-01-01 02:39:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:35.480084 | orchestrator | 2026-01-01 02:39:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:35.484821 | orchestrator | 2026-01-01 02:39:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:39:35.484871 | orchestrator | 2026-01-01 02:39:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:38.539851 | orchestrator | 2026-01-01 02:39:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:38.542436 | orchestrator | 2026-01-01 02:39:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:38.542487 | orchestrator | 2026-01-01 02:39:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:41.589685 | orchestrator | 2026-01-01 02:39:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:41.591303 | orchestrator | 2026-01-01 02:39:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:41.591709 | orchestrator | 2026-01-01 02:39:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:44.639081 | orchestrator | 2026-01-01 02:39:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:44.640155 | orchestrator | 2026-01-01 02:39:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:44.640196 | orchestrator | 2026-01-01 02:39:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:47.692689 | orchestrator | 2026-01-01 02:39:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:47.694969 | orchestrator | 2026-01-01 02:39:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:47.695017 | orchestrator | 2026-01-01 02:39:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:50.745413 | orchestrator | 2026-01-01 02:39:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:50.747644 | orchestrator | 2026-01-01 02:39:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:50.747690 | orchestrator | 2026-01-01 02:39:50 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:39:53.795150 | orchestrator | 2026-01-01 02:39:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:53.796914 | orchestrator | 2026-01-01 02:39:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:53.796976 | orchestrator | 2026-01-01 02:39:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:56.849083 | orchestrator | 2026-01-01 02:39:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:56.851368 | orchestrator | 2026-01-01 02:39:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:56.851422 | orchestrator | 2026-01-01 02:39:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:39:59.904062 | orchestrator | 2026-01-01 02:39:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:39:59.906071 | orchestrator | 2026-01-01 02:39:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:39:59.906158 | orchestrator | 2026-01-01 02:39:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:02.954178 | orchestrator | 2026-01-01 02:40:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:02.955752 | orchestrator | 2026-01-01 02:40:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:02.955801 | orchestrator | 2026-01-01 02:40:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:05.992783 | orchestrator | 2026-01-01 02:40:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:05.994502 | orchestrator | 2026-01-01 02:40:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:05.994530 | orchestrator | 2026-01-01 02:40:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:09.042820 | orchestrator | 2026-01-01 
02:40:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:09.045629 | orchestrator | 2026-01-01 02:40:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:09.045707 | orchestrator | 2026-01-01 02:40:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:12.088629 | orchestrator | 2026-01-01 02:40:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:12.090071 | orchestrator | 2026-01-01 02:40:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:12.090184 | orchestrator | 2026-01-01 02:40:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:15.147426 | orchestrator | 2026-01-01 02:40:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:15.148799 | orchestrator | 2026-01-01 02:40:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:15.148851 | orchestrator | 2026-01-01 02:40:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:18.201891 | orchestrator | 2026-01-01 02:40:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:18.202845 | orchestrator | 2026-01-01 02:40:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:18.202880 | orchestrator | 2026-01-01 02:40:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:21.254987 | orchestrator | 2026-01-01 02:40:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:21.256456 | orchestrator | 2026-01-01 02:40:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:21.256534 | orchestrator | 2026-01-01 02:40:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:24.307217 | orchestrator | 2026-01-01 02:40:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:40:24.310718 | orchestrator | 2026-01-01 02:40:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:24.310775 | orchestrator | 2026-01-01 02:40:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:27.359269 | orchestrator | 2026-01-01 02:40:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:27.361454 | orchestrator | 2026-01-01 02:40:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:27.361498 | orchestrator | 2026-01-01 02:40:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:30.425681 | orchestrator | 2026-01-01 02:40:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:30.427195 | orchestrator | 2026-01-01 02:40:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:30.427223 | orchestrator | 2026-01-01 02:40:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:33.476671 | orchestrator | 2026-01-01 02:40:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:33.479140 | orchestrator | 2026-01-01 02:40:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:33.479220 | orchestrator | 2026-01-01 02:40:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:36.528123 | orchestrator | 2026-01-01 02:40:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:36.529739 | orchestrator | 2026-01-01 02:40:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:36.529813 | orchestrator | 2026-01-01 02:40:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:39.570918 | orchestrator | 2026-01-01 02:40:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:39.573214 | orchestrator | 2026-01-01 02:40:39 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:39.573360 | orchestrator | 2026-01-01 02:40:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:42.612759 | orchestrator | 2026-01-01 02:40:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:42.612860 | orchestrator | 2026-01-01 02:40:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:42.612903 | orchestrator | 2026-01-01 02:40:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:45.658305 | orchestrator | 2026-01-01 02:40:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:45.660726 | orchestrator | 2026-01-01 02:40:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:45.660801 | orchestrator | 2026-01-01 02:40:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:48.711386 | orchestrator | 2026-01-01 02:40:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:48.713331 | orchestrator | 2026-01-01 02:40:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:48.713370 | orchestrator | 2026-01-01 02:40:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:51.770348 | orchestrator | 2026-01-01 02:40:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:51.772524 | orchestrator | 2026-01-01 02:40:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:51.772586 | orchestrator | 2026-01-01 02:40:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:54.825575 | orchestrator | 2026-01-01 02:40:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:54.828445 | orchestrator | 2026-01-01 02:40:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:40:54.828485 | orchestrator | 2026-01-01 02:40:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:40:57.882562 | orchestrator | 2026-01-01 02:40:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:40:57.885191 | orchestrator | 2026-01-01 02:40:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:40:57.885229 | orchestrator | 2026-01-01 02:40:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:00.932543 | orchestrator | 2026-01-01 02:41:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:00.933288 | orchestrator | 2026-01-01 02:41:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:00.933343 | orchestrator | 2026-01-01 02:41:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:03.982486 | orchestrator | 2026-01-01 02:41:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:03.984627 | orchestrator | 2026-01-01 02:41:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:03.984712 | orchestrator | 2026-01-01 02:41:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:07.034013 | orchestrator | 2026-01-01 02:41:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:07.035303 | orchestrator | 2026-01-01 02:41:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:07.035324 | orchestrator | 2026-01-01 02:41:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:10.087840 | orchestrator | 2026-01-01 02:41:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:10.089146 | orchestrator | 2026-01-01 02:41:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:10.089197 | orchestrator | 2026-01-01 02:41:10 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:41:13.131801 | orchestrator | 2026-01-01 02:41:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:13.133586 | orchestrator | 2026-01-01 02:41:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:13.133640 | orchestrator | 2026-01-01 02:41:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:16.189880 | orchestrator | 2026-01-01 02:41:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:16.191566 | orchestrator | 2026-01-01 02:41:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:16.191674 | orchestrator | 2026-01-01 02:41:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:19.239348 | orchestrator | 2026-01-01 02:41:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:19.241889 | orchestrator | 2026-01-01 02:41:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:19.241931 | orchestrator | 2026-01-01 02:41:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:22.289600 | orchestrator | 2026-01-01 02:41:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:22.290842 | orchestrator | 2026-01-01 02:41:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:22.290893 | orchestrator | 2026-01-01 02:41:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:25.338846 | orchestrator | 2026-01-01 02:41:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:25.340345 | orchestrator | 2026-01-01 02:41:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:25.340366 | orchestrator | 2026-01-01 02:41:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:28.399308 | orchestrator | 2026-01-01 
02:41:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:28.401991 | orchestrator | 2026-01-01 02:41:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:28.402066 | orchestrator | 2026-01-01 02:41:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:31.451793 | orchestrator | 2026-01-01 02:41:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:31.454349 | orchestrator | 2026-01-01 02:41:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:31.454436 | orchestrator | 2026-01-01 02:41:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:34.496341 | orchestrator | 2026-01-01 02:41:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:34.496902 | orchestrator | 2026-01-01 02:41:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:34.496938 | orchestrator | 2026-01-01 02:41:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:37.539405 | orchestrator | 2026-01-01 02:41:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:37.540520 | orchestrator | 2026-01-01 02:41:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:37.540562 | orchestrator | 2026-01-01 02:41:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:40.595990 | orchestrator | 2026-01-01 02:41:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:40.597050 | orchestrator | 2026-01-01 02:41:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:40.597099 | orchestrator | 2026-01-01 02:41:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:43.646708 | orchestrator | 2026-01-01 02:41:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:41:43.649373 | orchestrator | 2026-01-01 02:41:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:43.649486 | orchestrator | 2026-01-01 02:41:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:46.693579 | orchestrator | 2026-01-01 02:41:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:46.696438 | orchestrator | 2026-01-01 02:41:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:46.696493 | orchestrator | 2026-01-01 02:41:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:49.744625 | orchestrator | 2026-01-01 02:41:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:49.745597 | orchestrator | 2026-01-01 02:41:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:49.745656 | orchestrator | 2026-01-01 02:41:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:52.798181 | orchestrator | 2026-01-01 02:41:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:52.800221 | orchestrator | 2026-01-01 02:41:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:52.800266 | orchestrator | 2026-01-01 02:41:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:55.851005 | orchestrator | 2026-01-01 02:41:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:55.853015 | orchestrator | 2026-01-01 02:41:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:55.853139 | orchestrator | 2026-01-01 02:41:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:41:58.903370 | orchestrator | 2026-01-01 02:41:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:41:58.905967 | orchestrator | 2026-01-01 02:41:58 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:41:58.906125 | orchestrator | 2026-01-01 02:41:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:01.958854 | orchestrator | 2026-01-01 02:42:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:01.961605 | orchestrator | 2026-01-01 02:42:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:01.961672 | orchestrator | 2026-01-01 02:42:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:05.007486 | orchestrator | 2026-01-01 02:42:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:05.008268 | orchestrator | 2026-01-01 02:42:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:05.008317 | orchestrator | 2026-01-01 02:42:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:08.055494 | orchestrator | 2026-01-01 02:42:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:08.055717 | orchestrator | 2026-01-01 02:42:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:08.055738 | orchestrator | 2026-01-01 02:42:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:11.096831 | orchestrator | 2026-01-01 02:42:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:11.099750 | orchestrator | 2026-01-01 02:42:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:11.099888 | orchestrator | 2026-01-01 02:42:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:14.149782 | orchestrator | 2026-01-01 02:42:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:14.152277 | orchestrator | 2026-01-01 02:42:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:42:14.152362 | orchestrator | 2026-01-01 02:42:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:17.200661 | orchestrator | 2026-01-01 02:42:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:17.204105 | orchestrator | 2026-01-01 02:42:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:17.204161 | orchestrator | 2026-01-01 02:42:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:20.251213 | orchestrator | 2026-01-01 02:42:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:20.253020 | orchestrator | 2026-01-01 02:42:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:20.253087 | orchestrator | 2026-01-01 02:42:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:23.307041 | orchestrator | 2026-01-01 02:42:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:23.310743 | orchestrator | 2026-01-01 02:42:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:23.310815 | orchestrator | 2026-01-01 02:42:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:26.357509 | orchestrator | 2026-01-01 02:42:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:26.358843 | orchestrator | 2026-01-01 02:42:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:26.358881 | orchestrator | 2026-01-01 02:42:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:29.414058 | orchestrator | 2026-01-01 02:42:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:29.415416 | orchestrator | 2026-01-01 02:42:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:29.415727 | orchestrator | 2026-01-01 02:42:29 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:42:32.466459 | orchestrator | 2026-01-01 02:42:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:32.467820 | orchestrator | 2026-01-01 02:42:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:32.468010 | orchestrator | 2026-01-01 02:42:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:35.524274 | orchestrator | 2026-01-01 02:42:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:35.526266 | orchestrator | 2026-01-01 02:42:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:35.526297 | orchestrator | 2026-01-01 02:42:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:38.569537 | orchestrator | 2026-01-01 02:42:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:38.570817 | orchestrator | 2026-01-01 02:42:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:38.570879 | orchestrator | 2026-01-01 02:42:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:41.619455 | orchestrator | 2026-01-01 02:42:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:41.622208 | orchestrator | 2026-01-01 02:42:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:41.622307 | orchestrator | 2026-01-01 02:42:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:44.677169 | orchestrator | 2026-01-01 02:42:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:44.678423 | orchestrator | 2026-01-01 02:42:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:44.678480 | orchestrator | 2026-01-01 02:42:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:47.719968 | orchestrator | 2026-01-01 
02:42:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:47.721035 | orchestrator | 2026-01-01 02:42:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:47.721193 | orchestrator | 2026-01-01 02:42:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:50.766472 | orchestrator | 2026-01-01 02:42:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:50.767905 | orchestrator | 2026-01-01 02:42:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:50.767961 | orchestrator | 2026-01-01 02:42:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:53.808319 | orchestrator | 2026-01-01 02:42:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:53.810530 | orchestrator | 2026-01-01 02:42:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:53.810642 | orchestrator | 2026-01-01 02:42:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:56.862863 | orchestrator | 2026-01-01 02:42:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:56.864774 | orchestrator | 2026-01-01 02:42:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:56.864955 | orchestrator | 2026-01-01 02:42:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:42:59.915770 | orchestrator | 2026-01-01 02:42:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:42:59.918282 | orchestrator | 2026-01-01 02:42:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:42:59.918342 | orchestrator | 2026-01-01 02:42:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:43:02.965731 | orchestrator | 2026-01-01 02:43:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:43:02.967886 | orchestrator | 2026-01-01 02:43:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:43:02.967926 | orchestrator | 2026-01-01 02:43:02 | INFO  | Wait 1 second(s) until the next check [... repetitive polling output condensed: tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 both remained in state STARTED, re-checked every ~3 seconds ("Wait 1 second(s) until the next check") from 2026-01-01 02:43:02 through 2026-01-01 02:48:35 with no state change ...] 2026-01-01 02:48:35.613433 | orchestrator | 2026-01-01 02:48:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:48:35.614853 | orchestrator | 2026-01-01 02:48:35 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:48:35.614899 | orchestrator | 2026-01-01 02:48:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:48:38.663642 | orchestrator | 2026-01-01 02:48:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:48:38.665511 | orchestrator | 2026-01-01 02:48:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:48:38.665553 | orchestrator | 2026-01-01 02:48:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:48:41.716097 | orchestrator | 2026-01-01 02:48:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:48:41.717324 | orchestrator | 2026-01-01 02:48:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:48:41.717625 | orchestrator | 2026-01-01 02:48:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:48:44.763362 | orchestrator | 2026-01-01 02:48:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:48:44.765184 | orchestrator | 2026-01-01 02:48:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:48:44.765233 | orchestrator | 2026-01-01 02:48:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:48:47.813740 | orchestrator | 2026-01-01 02:48:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:48:47.815778 | orchestrator | 2026-01-01 02:48:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:48:47.815833 | orchestrator | 2026-01-01 02:48:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:48:50.866592 | orchestrator | 2026-01-01 02:48:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:48:50.869125 | orchestrator | 2026-01-01 02:48:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:48:50.869222 | orchestrator | 2026-01-01 02:48:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:48:53.922097 | orchestrator | 2026-01-01 02:48:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:48:53.924671 | orchestrator | 2026-01-01 02:48:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:48:53.924737 | orchestrator | 2026-01-01 02:48:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:48:56.972919 | orchestrator | 2026-01-01 02:48:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:48:56.974994 | orchestrator | 2026-01-01 02:48:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:48:56.975051 | orchestrator | 2026-01-01 02:48:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:00.026326 | orchestrator | 2026-01-01 02:49:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:00.029362 | orchestrator | 2026-01-01 02:49:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:00.029421 | orchestrator | 2026-01-01 02:49:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:03.087029 | orchestrator | 2026-01-01 02:49:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:03.089234 | orchestrator | 2026-01-01 02:49:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:03.089314 | orchestrator | 2026-01-01 02:49:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:06.141515 | orchestrator | 2026-01-01 02:49:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:06.142498 | orchestrator | 2026-01-01 02:49:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:06.142514 | orchestrator | 2026-01-01 02:49:06 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:49:09.188941 | orchestrator | 2026-01-01 02:49:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:09.190944 | orchestrator | 2026-01-01 02:49:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:09.191042 | orchestrator | 2026-01-01 02:49:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:12.244748 | orchestrator | 2026-01-01 02:49:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:12.245869 | orchestrator | 2026-01-01 02:49:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:12.245904 | orchestrator | 2026-01-01 02:49:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:15.298346 | orchestrator | 2026-01-01 02:49:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:15.301275 | orchestrator | 2026-01-01 02:49:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:15.301356 | orchestrator | 2026-01-01 02:49:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:18.364999 | orchestrator | 2026-01-01 02:49:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:18.367521 | orchestrator | 2026-01-01 02:49:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:18.367583 | orchestrator | 2026-01-01 02:49:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:21.416918 | orchestrator | 2026-01-01 02:49:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:21.418690 | orchestrator | 2026-01-01 02:49:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:21.418724 | orchestrator | 2026-01-01 02:49:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:24.467515 | orchestrator | 2026-01-01 
02:49:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:24.470281 | orchestrator | 2026-01-01 02:49:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:24.470484 | orchestrator | 2026-01-01 02:49:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:27.516140 | orchestrator | 2026-01-01 02:49:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:27.517568 | orchestrator | 2026-01-01 02:49:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:27.517721 | orchestrator | 2026-01-01 02:49:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:30.564337 | orchestrator | 2026-01-01 02:49:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:30.565821 | orchestrator | 2026-01-01 02:49:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:30.565866 | orchestrator | 2026-01-01 02:49:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:33.620326 | orchestrator | 2026-01-01 02:49:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:33.622177 | orchestrator | 2026-01-01 02:49:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:33.622266 | orchestrator | 2026-01-01 02:49:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:36.672457 | orchestrator | 2026-01-01 02:49:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:36.674280 | orchestrator | 2026-01-01 02:49:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:36.674320 | orchestrator | 2026-01-01 02:49:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:39.725905 | orchestrator | 2026-01-01 02:49:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:49:39.726625 | orchestrator | 2026-01-01 02:49:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:39.726804 | orchestrator | 2026-01-01 02:49:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:42.779744 | orchestrator | 2026-01-01 02:49:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:42.781638 | orchestrator | 2026-01-01 02:49:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:42.781676 | orchestrator | 2026-01-01 02:49:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:45.831897 | orchestrator | 2026-01-01 02:49:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:45.834106 | orchestrator | 2026-01-01 02:49:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:45.834160 | orchestrator | 2026-01-01 02:49:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:48.897533 | orchestrator | 2026-01-01 02:49:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:48.899899 | orchestrator | 2026-01-01 02:49:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:48.899983 | orchestrator | 2026-01-01 02:49:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:51.944511 | orchestrator | 2026-01-01 02:49:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:51.947310 | orchestrator | 2026-01-01 02:49:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:51.947413 | orchestrator | 2026-01-01 02:49:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:54.995580 | orchestrator | 2026-01-01 02:49:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:54.998582 | orchestrator | 2026-01-01 02:49:54 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:54.998670 | orchestrator | 2026-01-01 02:49:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:49:58.052008 | orchestrator | 2026-01-01 02:49:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:49:58.052432 | orchestrator | 2026-01-01 02:49:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:49:58.052466 | orchestrator | 2026-01-01 02:49:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:01.092046 | orchestrator | 2026-01-01 02:50:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:01.092715 | orchestrator | 2026-01-01 02:50:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:01.092741 | orchestrator | 2026-01-01 02:50:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:04.130606 | orchestrator | 2026-01-01 02:50:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:04.133091 | orchestrator | 2026-01-01 02:50:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:04.133233 | orchestrator | 2026-01-01 02:50:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:07.169417 | orchestrator | 2026-01-01 02:50:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:07.170891 | orchestrator | 2026-01-01 02:50:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:07.171032 | orchestrator | 2026-01-01 02:50:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:10.208467 | orchestrator | 2026-01-01 02:50:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:10.209715 | orchestrator | 2026-01-01 02:50:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:50:10.209759 | orchestrator | 2026-01-01 02:50:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:13.247150 | orchestrator | 2026-01-01 02:50:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:13.249063 | orchestrator | 2026-01-01 02:50:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:13.249125 | orchestrator | 2026-01-01 02:50:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:16.288664 | orchestrator | 2026-01-01 02:50:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:16.290182 | orchestrator | 2026-01-01 02:50:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:16.290209 | orchestrator | 2026-01-01 02:50:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:19.337330 | orchestrator | 2026-01-01 02:50:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:19.339268 | orchestrator | 2026-01-01 02:50:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:19.339452 | orchestrator | 2026-01-01 02:50:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:22.390407 | orchestrator | 2026-01-01 02:50:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:22.392324 | orchestrator | 2026-01-01 02:50:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:22.392497 | orchestrator | 2026-01-01 02:50:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:25.440389 | orchestrator | 2026-01-01 02:50:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:25.441894 | orchestrator | 2026-01-01 02:50:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:25.441931 | orchestrator | 2026-01-01 02:50:25 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:50:28.486236 | orchestrator | 2026-01-01 02:50:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:28.487855 | orchestrator | 2026-01-01 02:50:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:28.487921 | orchestrator | 2026-01-01 02:50:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:31.526594 | orchestrator | 2026-01-01 02:50:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:31.527705 | orchestrator | 2026-01-01 02:50:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:31.527773 | orchestrator | 2026-01-01 02:50:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:34.568399 | orchestrator | 2026-01-01 02:50:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:34.570408 | orchestrator | 2026-01-01 02:50:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:34.570481 | orchestrator | 2026-01-01 02:50:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:37.604452 | orchestrator | 2026-01-01 02:50:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:37.605496 | orchestrator | 2026-01-01 02:50:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:37.605579 | orchestrator | 2026-01-01 02:50:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:40.642553 | orchestrator | 2026-01-01 02:50:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:40.642980 | orchestrator | 2026-01-01 02:50:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:40.643032 | orchestrator | 2026-01-01 02:50:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:43.685036 | orchestrator | 2026-01-01 
02:50:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:43.686159 | orchestrator | 2026-01-01 02:50:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:43.686243 | orchestrator | 2026-01-01 02:50:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:46.733442 | orchestrator | 2026-01-01 02:50:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:46.735339 | orchestrator | 2026-01-01 02:50:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:46.735390 | orchestrator | 2026-01-01 02:50:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:49.790240 | orchestrator | 2026-01-01 02:50:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:49.791760 | orchestrator | 2026-01-01 02:50:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:49.791801 | orchestrator | 2026-01-01 02:50:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:52.838365 | orchestrator | 2026-01-01 02:50:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:52.839225 | orchestrator | 2026-01-01 02:50:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:52.839251 | orchestrator | 2026-01-01 02:50:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:55.875625 | orchestrator | 2026-01-01 02:50:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:50:55.875836 | orchestrator | 2026-01-01 02:50:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:55.875867 | orchestrator | 2026-01-01 02:50:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:50:58.930738 | orchestrator | 2026-01-01 02:50:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:50:58.934633 | orchestrator | 2026-01-01 02:50:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:50:58.935416 | orchestrator | 2026-01-01 02:50:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:01.979012 | orchestrator | 2026-01-01 02:51:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:01.980179 | orchestrator | 2026-01-01 02:51:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:01.980220 | orchestrator | 2026-01-01 02:51:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:05.034676 | orchestrator | 2026-01-01 02:51:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:05.037270 | orchestrator | 2026-01-01 02:51:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:05.037320 | orchestrator | 2026-01-01 02:51:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:08.084285 | orchestrator | 2026-01-01 02:51:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:08.084409 | orchestrator | 2026-01-01 02:51:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:08.084425 | orchestrator | 2026-01-01 02:51:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:11.131734 | orchestrator | 2026-01-01 02:51:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:11.133485 | orchestrator | 2026-01-01 02:51:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:11.133510 | orchestrator | 2026-01-01 02:51:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:14.190707 | orchestrator | 2026-01-01 02:51:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:14.192016 | orchestrator | 2026-01-01 02:51:14 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:14.192163 | orchestrator | 2026-01-01 02:51:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:17.242282 | orchestrator | 2026-01-01 02:51:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:17.244067 | orchestrator | 2026-01-01 02:51:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:17.244163 | orchestrator | 2026-01-01 02:51:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:20.297459 | orchestrator | 2026-01-01 02:51:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:20.299155 | orchestrator | 2026-01-01 02:51:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:20.299205 | orchestrator | 2026-01-01 02:51:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:23.345545 | orchestrator | 2026-01-01 02:51:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:23.348187 | orchestrator | 2026-01-01 02:51:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:23.348224 | orchestrator | 2026-01-01 02:51:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:26.400497 | orchestrator | 2026-01-01 02:51:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:26.403366 | orchestrator | 2026-01-01 02:51:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:26.403416 | orchestrator | 2026-01-01 02:51:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:29.455168 | orchestrator | 2026-01-01 02:51:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:29.457480 | orchestrator | 2026-01-01 02:51:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:51:29.457889 | orchestrator | 2026-01-01 02:51:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:32.508089 | orchestrator | 2026-01-01 02:51:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:32.510285 | orchestrator | 2026-01-01 02:51:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:32.510398 | orchestrator | 2026-01-01 02:51:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:35.567803 | orchestrator | 2026-01-01 02:51:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:35.569940 | orchestrator | 2026-01-01 02:51:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:35.569978 | orchestrator | 2026-01-01 02:51:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:38.622315 | orchestrator | 2026-01-01 02:51:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:38.627374 | orchestrator | 2026-01-01 02:51:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:38.627651 | orchestrator | 2026-01-01 02:51:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:41.675673 | orchestrator | 2026-01-01 02:51:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:41.676714 | orchestrator | 2026-01-01 02:51:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:41.676753 | orchestrator | 2026-01-01 02:51:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:44.730590 | orchestrator | 2026-01-01 02:51:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:44.732323 | orchestrator | 2026-01-01 02:51:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:44.732437 | orchestrator | 2026-01-01 02:51:44 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:51:47.786184 | orchestrator | 2026-01-01 02:51:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:47.787205 | orchestrator | 2026-01-01 02:51:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:47.787274 | orchestrator | 2026-01-01 02:51:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:50.841284 | orchestrator | 2026-01-01 02:51:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:50.844523 | orchestrator | 2026-01-01 02:51:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:50.844612 | orchestrator | 2026-01-01 02:51:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:53.890845 | orchestrator | 2026-01-01 02:51:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:53.891618 | orchestrator | 2026-01-01 02:51:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:53.891926 | orchestrator | 2026-01-01 02:51:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:56.944278 | orchestrator | 2026-01-01 02:51:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:56.946262 | orchestrator | 2026-01-01 02:51:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:56.946328 | orchestrator | 2026-01-01 02:51:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:51:59.997230 | orchestrator | 2026-01-01 02:51:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:51:59.999619 | orchestrator | 2026-01-01 02:51:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:51:59.999683 | orchestrator | 2026-01-01 02:51:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:03.049045 | orchestrator | 2026-01-01 
02:52:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:52:03.050827 | orchestrator | 2026-01-01 02:52:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:52:03.050870 | orchestrator | 2026-01-01 02:52:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:06.095480 | orchestrator | 2026-01-01 02:52:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:52:06.095889 | orchestrator | 2026-01-01 02:52:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:52:06.095916 | orchestrator | 2026-01-01 02:52:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:09.141705 | orchestrator | 2026-01-01 02:52:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:52:09.143581 | orchestrator | 2026-01-01 02:52:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:52:09.143635 | orchestrator | 2026-01-01 02:52:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:12.187008 | orchestrator | 2026-01-01 02:52:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:52:12.188602 | orchestrator | 2026-01-01 02:52:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:52:12.188932 | orchestrator | 2026-01-01 02:52:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:15.227185 | orchestrator | 2026-01-01 02:52:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:52:15.228494 | orchestrator | 2026-01-01 02:52:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:52:15.229293 | orchestrator | 2026-01-01 02:52:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:18.274652 | orchestrator | 2026-01-01 02:52:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:52:18.276688 | orchestrator | 2026-01-01 02:52:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:52:18.276797 | orchestrator | 2026-01-01 02:52:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:52:21.325957 | orchestrator | 2026-01-01 02:52:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:57:35.360724 | orchestrator | 2026-01-01 02:57:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state
STARTED 2026-01-01 02:57:35.362905 | orchestrator | 2026-01-01 02:57:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:57:35.362949 | orchestrator | 2026-01-01 02:57:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:57:38.411385 | orchestrator | 2026-01-01 02:57:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:57:38.414110 | orchestrator | 2026-01-01 02:57:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:57:38.414191 | orchestrator | 2026-01-01 02:57:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:57:41.462306 | orchestrator | 2026-01-01 02:57:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:57:41.464204 | orchestrator | 2026-01-01 02:57:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:57:41.464244 | orchestrator | 2026-01-01 02:57:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:57:44.507412 | orchestrator | 2026-01-01 02:57:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:57:44.507742 | orchestrator | 2026-01-01 02:57:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:57:44.508008 | orchestrator | 2026-01-01 02:57:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:57:47.561748 | orchestrator | 2026-01-01 02:57:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:57:47.562963 | orchestrator | 2026-01-01 02:57:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:57:47.563008 | orchestrator | 2026-01-01 02:57:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:57:50.607859 | orchestrator | 2026-01-01 02:57:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:57:50.610270 | orchestrator | 2026-01-01 02:57:50 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:57:50.610331 | orchestrator | 2026-01-01 02:57:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:57:53.662667 | orchestrator | 2026-01-01 02:57:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:57:53.664283 | orchestrator | 2026-01-01 02:57:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:57:53.664946 | orchestrator | 2026-01-01 02:57:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:57:56.727951 | orchestrator | 2026-01-01 02:57:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:57:56.730931 | orchestrator | 2026-01-01 02:57:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:57:56.731039 | orchestrator | 2026-01-01 02:57:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:57:59.783239 | orchestrator | 2026-01-01 02:57:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:57:59.785734 | orchestrator | 2026-01-01 02:57:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:57:59.786141 | orchestrator | 2026-01-01 02:57:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:02.834283 | orchestrator | 2026-01-01 02:58:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:02.837588 | orchestrator | 2026-01-01 02:58:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:02.837958 | orchestrator | 2026-01-01 02:58:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:05.888935 | orchestrator | 2026-01-01 02:58:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:05.891905 | orchestrator | 2026-01-01 02:58:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:58:05.892128 | orchestrator | 2026-01-01 02:58:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:08.946825 | orchestrator | 2026-01-01 02:58:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:08.947288 | orchestrator | 2026-01-01 02:58:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:08.947473 | orchestrator | 2026-01-01 02:58:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:12.003937 | orchestrator | 2026-01-01 02:58:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:12.008013 | orchestrator | 2026-01-01 02:58:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:12.008108 | orchestrator | 2026-01-01 02:58:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:15.059218 | orchestrator | 2026-01-01 02:58:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:15.060712 | orchestrator | 2026-01-01 02:58:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:15.060778 | orchestrator | 2026-01-01 02:58:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:18.100710 | orchestrator | 2026-01-01 02:58:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:18.101996 | orchestrator | 2026-01-01 02:58:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:18.102079 | orchestrator | 2026-01-01 02:58:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:21.154711 | orchestrator | 2026-01-01 02:58:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:21.155366 | orchestrator | 2026-01-01 02:58:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:21.155403 | orchestrator | 2026-01-01 02:58:21 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:58:24.210009 | orchestrator | 2026-01-01 02:58:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:24.210870 | orchestrator | 2026-01-01 02:58:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:24.210891 | orchestrator | 2026-01-01 02:58:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:27.266262 | orchestrator | 2026-01-01 02:58:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:27.268835 | orchestrator | 2026-01-01 02:58:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:27.269065 | orchestrator | 2026-01-01 02:58:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:30.324772 | orchestrator | 2026-01-01 02:58:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:30.325672 | orchestrator | 2026-01-01 02:58:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:30.325711 | orchestrator | 2026-01-01 02:58:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:33.378463 | orchestrator | 2026-01-01 02:58:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:33.380990 | orchestrator | 2026-01-01 02:58:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:33.381183 | orchestrator | 2026-01-01 02:58:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:36.422973 | orchestrator | 2026-01-01 02:58:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:36.425391 | orchestrator | 2026-01-01 02:58:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:36.425746 | orchestrator | 2026-01-01 02:58:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:39.477363 | orchestrator | 2026-01-01 
02:58:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:39.480832 | orchestrator | 2026-01-01 02:58:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:39.480985 | orchestrator | 2026-01-01 02:58:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:42.534639 | orchestrator | 2026-01-01 02:58:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:42.535617 | orchestrator | 2026-01-01 02:58:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:42.535659 | orchestrator | 2026-01-01 02:58:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:45.575840 | orchestrator | 2026-01-01 02:58:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:45.576940 | orchestrator | 2026-01-01 02:58:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:45.576982 | orchestrator | 2026-01-01 02:58:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:48.631604 | orchestrator | 2026-01-01 02:58:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:48.633780 | orchestrator | 2026-01-01 02:58:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:48.633847 | orchestrator | 2026-01-01 02:58:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:51.687728 | orchestrator | 2026-01-01 02:58:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:51.689357 | orchestrator | 2026-01-01 02:58:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:51.689442 | orchestrator | 2026-01-01 02:58:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:54.738813 | orchestrator | 2026-01-01 02:58:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 02:58:54.740292 | orchestrator | 2026-01-01 02:58:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:54.740380 | orchestrator | 2026-01-01 02:58:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:58:57.788824 | orchestrator | 2026-01-01 02:58:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:58:57.791126 | orchestrator | 2026-01-01 02:58:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:58:57.791179 | orchestrator | 2026-01-01 02:58:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:00.840164 | orchestrator | 2026-01-01 02:59:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:00.841400 | orchestrator | 2026-01-01 02:59:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:00.841453 | orchestrator | 2026-01-01 02:59:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:03.894106 | orchestrator | 2026-01-01 02:59:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:03.897656 | orchestrator | 2026-01-01 02:59:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:03.897732 | orchestrator | 2026-01-01 02:59:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:06.947059 | orchestrator | 2026-01-01 02:59:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:06.949294 | orchestrator | 2026-01-01 02:59:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:06.949360 | orchestrator | 2026-01-01 02:59:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:10.001706 | orchestrator | 2026-01-01 02:59:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:10.003623 | orchestrator | 2026-01-01 02:59:10 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:10.003768 | orchestrator | 2026-01-01 02:59:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:13.050640 | orchestrator | 2026-01-01 02:59:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:13.051355 | orchestrator | 2026-01-01 02:59:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:13.051466 | orchestrator | 2026-01-01 02:59:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:16.102299 | orchestrator | 2026-01-01 02:59:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:16.104303 | orchestrator | 2026-01-01 02:59:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:16.104619 | orchestrator | 2026-01-01 02:59:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:19.156067 | orchestrator | 2026-01-01 02:59:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:19.158394 | orchestrator | 2026-01-01 02:59:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:19.158457 | orchestrator | 2026-01-01 02:59:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:22.207409 | orchestrator | 2026-01-01 02:59:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:22.208846 | orchestrator | 2026-01-01 02:59:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:22.209235 | orchestrator | 2026-01-01 02:59:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:25.255940 | orchestrator | 2026-01-01 02:59:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:25.257278 | orchestrator | 2026-01-01 02:59:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
02:59:25.257357 | orchestrator | 2026-01-01 02:59:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:28.314221 | orchestrator | 2026-01-01 02:59:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:28.317072 | orchestrator | 2026-01-01 02:59:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:28.317154 | orchestrator | 2026-01-01 02:59:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:31.360639 | orchestrator | 2026-01-01 02:59:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:31.362143 | orchestrator | 2026-01-01 02:59:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:31.362191 | orchestrator | 2026-01-01 02:59:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:34.412257 | orchestrator | 2026-01-01 02:59:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:34.416172 | orchestrator | 2026-01-01 02:59:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:34.416216 | orchestrator | 2026-01-01 02:59:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:37.452127 | orchestrator | 2026-01-01 02:59:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:37.453150 | orchestrator | 2026-01-01 02:59:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:37.453185 | orchestrator | 2026-01-01 02:59:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:40.500655 | orchestrator | 2026-01-01 02:59:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:40.501514 | orchestrator | 2026-01-01 02:59:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:40.501801 | orchestrator | 2026-01-01 02:59:40 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 02:59:43.555212 | orchestrator | 2026-01-01 02:59:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:43.556819 | orchestrator | 2026-01-01 02:59:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:43.556868 | orchestrator | 2026-01-01 02:59:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:46.600719 | orchestrator | 2026-01-01 02:59:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:46.604029 | orchestrator | 2026-01-01 02:59:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:46.604109 | orchestrator | 2026-01-01 02:59:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:49.650791 | orchestrator | 2026-01-01 02:59:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:49.652608 | orchestrator | 2026-01-01 02:59:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:49.652675 | orchestrator | 2026-01-01 02:59:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:52.701419 | orchestrator | 2026-01-01 02:59:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:52.702517 | orchestrator | 2026-01-01 02:59:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:52.702727 | orchestrator | 2026-01-01 02:59:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:55.750920 | orchestrator | 2026-01-01 02:59:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:55.752541 | orchestrator | 2026-01-01 02:59:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:55.753170 | orchestrator | 2026-01-01 02:59:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 02:59:58.804471 | orchestrator | 2026-01-01 
02:59:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 02:59:58.806702 | orchestrator | 2026-01-01 02:59:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 02:59:58.806752 | orchestrator | 2026-01-01 02:59:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:01.861011 | orchestrator | 2026-01-01 03:00:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:01.865363 | orchestrator | 2026-01-01 03:00:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:01.865441 | orchestrator | 2026-01-01 03:00:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:04.910387 | orchestrator | 2026-01-01 03:00:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:04.912382 | orchestrator | 2026-01-01 03:00:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:04.912432 | orchestrator | 2026-01-01 03:00:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:07.962274 | orchestrator | 2026-01-01 03:00:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:07.964913 | orchestrator | 2026-01-01 03:00:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:07.965046 | orchestrator | 2026-01-01 03:00:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:11.014520 | orchestrator | 2026-01-01 03:00:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:11.015684 | orchestrator | 2026-01-01 03:00:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:11.015740 | orchestrator | 2026-01-01 03:00:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:14.067089 | orchestrator | 2026-01-01 03:00:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:00:14.069748 | orchestrator | 2026-01-01 03:00:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:14.069804 | orchestrator | 2026-01-01 03:00:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:17.109911 | orchestrator | 2026-01-01 03:00:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:17.112354 | orchestrator | 2026-01-01 03:00:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:17.112408 | orchestrator | 2026-01-01 03:00:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:20.160976 | orchestrator | 2026-01-01 03:00:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:20.162986 | orchestrator | 2026-01-01 03:00:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:20.163065 | orchestrator | 2026-01-01 03:00:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:23.194219 | orchestrator | 2026-01-01 03:00:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:23.194551 | orchestrator | 2026-01-01 03:00:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:23.194653 | orchestrator | 2026-01-01 03:00:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:26.246987 | orchestrator | 2026-01-01 03:00:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:26.248488 | orchestrator | 2026-01-01 03:00:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:26.248527 | orchestrator | 2026-01-01 03:00:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:29.305024 | orchestrator | 2026-01-01 03:00:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:29.306181 | orchestrator | 2026-01-01 03:00:29 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:29.306654 | orchestrator | 2026-01-01 03:00:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:32.356019 | orchestrator | 2026-01-01 03:00:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:32.357337 | orchestrator | 2026-01-01 03:00:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:32.357386 | orchestrator | 2026-01-01 03:00:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:35.404966 | orchestrator | 2026-01-01 03:00:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:35.406831 | orchestrator | 2026-01-01 03:00:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:35.406894 | orchestrator | 2026-01-01 03:00:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:38.451671 | orchestrator | 2026-01-01 03:00:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:38.453932 | orchestrator | 2026-01-01 03:00:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:38.454007 | orchestrator | 2026-01-01 03:00:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:41.494509 | orchestrator | 2026-01-01 03:00:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:41.495968 | orchestrator | 2026-01-01 03:00:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:41.496398 | orchestrator | 2026-01-01 03:00:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:44.548603 | orchestrator | 2026-01-01 03:00:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:44.549539 | orchestrator | 2026-01-01 03:00:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:00:44.549802 | orchestrator | 2026-01-01 03:00:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:47.593415 | orchestrator | 2026-01-01 03:00:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:47.597257 | orchestrator | 2026-01-01 03:00:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:47.597888 | orchestrator | 2026-01-01 03:00:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:50.646915 | orchestrator | 2026-01-01 03:00:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:50.647689 | orchestrator | 2026-01-01 03:00:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:50.647835 | orchestrator | 2026-01-01 03:00:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:53.692155 | orchestrator | 2026-01-01 03:00:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:53.693893 | orchestrator | 2026-01-01 03:00:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:53.693941 | orchestrator | 2026-01-01 03:00:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:56.745006 | orchestrator | 2026-01-01 03:00:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:56.747234 | orchestrator | 2026-01-01 03:00:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:56.747300 | orchestrator | 2026-01-01 03:00:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:00:59.801679 | orchestrator | 2026-01-01 03:00:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:00:59.802843 | orchestrator | 2026-01-01 03:00:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:00:59.802880 | orchestrator | 2026-01-01 03:00:59 | INFO  | Wait 1 second(s) 
until the next check
2026-01-01 03:01:02.847201 | orchestrator | 2026-01-01 03:01:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 03:01:02.847316 | orchestrator | 2026-01-01 03:01:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 03:01:02.847327 | orchestrator | 2026-01-01 03:01:02 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 03:01:05 through 03:06:14; both tasks remained in state STARTED throughout ...]
2026-01-01 03:06:17.061733 | orchestrator | 2026-01-01 03:06:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 03:06:17.063580 | orchestrator | 2026-01-01 03:06:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 03:06:17.063701 | orchestrator | 2026-01-01 03:06:17 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 03:06:20.112800 | orchestrator | 2026-01-01 03:06:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:20.114177 | orchestrator | 2026-01-01 03:06:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:20.114235 | orchestrator | 2026-01-01 03:06:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:23.154977 | orchestrator | 2026-01-01 03:06:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:23.156531 | orchestrator | 2026-01-01 03:06:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:23.156563 | orchestrator | 2026-01-01 03:06:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:26.201331 | orchestrator | 2026-01-01 03:06:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:26.203935 | orchestrator | 2026-01-01 03:06:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:26.203981 | orchestrator | 2026-01-01 03:06:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:29.245715 | orchestrator | 2026-01-01 03:06:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:29.246295 | orchestrator | 2026-01-01 03:06:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:29.246419 | orchestrator | 2026-01-01 03:06:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:32.283837 | orchestrator | 2026-01-01 03:06:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:32.284437 | orchestrator | 2026-01-01 03:06:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:32.284471 | orchestrator | 2026-01-01 03:06:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:35.329561 | orchestrator | 2026-01-01 
03:06:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:35.330968 | orchestrator | 2026-01-01 03:06:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:35.331010 | orchestrator | 2026-01-01 03:06:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:38.374777 | orchestrator | 2026-01-01 03:06:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:38.376927 | orchestrator | 2026-01-01 03:06:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:38.376991 | orchestrator | 2026-01-01 03:06:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:41.416891 | orchestrator | 2026-01-01 03:06:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:41.419162 | orchestrator | 2026-01-01 03:06:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:41.419211 | orchestrator | 2026-01-01 03:06:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:44.473306 | orchestrator | 2026-01-01 03:06:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:44.476132 | orchestrator | 2026-01-01 03:06:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:44.476167 | orchestrator | 2026-01-01 03:06:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:47.523094 | orchestrator | 2026-01-01 03:06:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:47.525205 | orchestrator | 2026-01-01 03:06:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:47.525289 | orchestrator | 2026-01-01 03:06:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:50.574985 | orchestrator | 2026-01-01 03:06:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:06:50.575607 | orchestrator | 2026-01-01 03:06:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:50.575964 | orchestrator | 2026-01-01 03:06:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:53.628502 | orchestrator | 2026-01-01 03:06:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:53.630116 | orchestrator | 2026-01-01 03:06:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:53.630167 | orchestrator | 2026-01-01 03:06:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:56.676736 | orchestrator | 2026-01-01 03:06:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:56.679666 | orchestrator | 2026-01-01 03:06:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:56.679727 | orchestrator | 2026-01-01 03:06:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:06:59.736773 | orchestrator | 2026-01-01 03:06:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:06:59.739671 | orchestrator | 2026-01-01 03:06:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:06:59.739712 | orchestrator | 2026-01-01 03:06:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:02.784139 | orchestrator | 2026-01-01 03:07:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:02.786128 | orchestrator | 2026-01-01 03:07:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:02.786168 | orchestrator | 2026-01-01 03:07:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:05.834994 | orchestrator | 2026-01-01 03:07:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:05.839740 | orchestrator | 2026-01-01 03:07:05 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:05.839868 | orchestrator | 2026-01-01 03:07:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:08.897149 | orchestrator | 2026-01-01 03:07:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:08.898518 | orchestrator | 2026-01-01 03:07:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:08.898702 | orchestrator | 2026-01-01 03:07:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:11.954073 | orchestrator | 2026-01-01 03:07:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:11.955803 | orchestrator | 2026-01-01 03:07:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:11.955868 | orchestrator | 2026-01-01 03:07:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:15.009998 | orchestrator | 2026-01-01 03:07:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:15.010297 | orchestrator | 2026-01-01 03:07:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:15.010325 | orchestrator | 2026-01-01 03:07:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:18.058109 | orchestrator | 2026-01-01 03:07:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:18.059600 | orchestrator | 2026-01-01 03:07:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:18.059704 | orchestrator | 2026-01-01 03:07:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:21.107636 | orchestrator | 2026-01-01 03:07:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:21.110121 | orchestrator | 2026-01-01 03:07:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:07:21.110189 | orchestrator | 2026-01-01 03:07:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:24.167278 | orchestrator | 2026-01-01 03:07:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:24.170131 | orchestrator | 2026-01-01 03:07:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:24.170194 | orchestrator | 2026-01-01 03:07:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:27.221217 | orchestrator | 2026-01-01 03:07:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:27.224645 | orchestrator | 2026-01-01 03:07:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:27.224859 | orchestrator | 2026-01-01 03:07:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:30.275540 | orchestrator | 2026-01-01 03:07:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:30.278654 | orchestrator | 2026-01-01 03:07:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:30.278711 | orchestrator | 2026-01-01 03:07:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:33.332246 | orchestrator | 2026-01-01 03:07:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:33.332497 | orchestrator | 2026-01-01 03:07:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:33.332529 | orchestrator | 2026-01-01 03:07:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:36.384079 | orchestrator | 2026-01-01 03:07:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:36.385409 | orchestrator | 2026-01-01 03:07:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:36.385446 | orchestrator | 2026-01-01 03:07:36 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:07:39.437929 | orchestrator | 2026-01-01 03:07:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:39.439477 | orchestrator | 2026-01-01 03:07:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:39.439529 | orchestrator | 2026-01-01 03:07:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:42.489875 | orchestrator | 2026-01-01 03:07:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:42.494563 | orchestrator | 2026-01-01 03:07:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:42.494651 | orchestrator | 2026-01-01 03:07:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:45.546312 | orchestrator | 2026-01-01 03:07:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:45.547123 | orchestrator | 2026-01-01 03:07:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:45.547210 | orchestrator | 2026-01-01 03:07:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:48.604739 | orchestrator | 2026-01-01 03:07:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:48.605696 | orchestrator | 2026-01-01 03:07:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:48.605746 | orchestrator | 2026-01-01 03:07:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:51.651456 | orchestrator | 2026-01-01 03:07:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:51.652489 | orchestrator | 2026-01-01 03:07:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:51.652552 | orchestrator | 2026-01-01 03:07:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:54.704188 | orchestrator | 2026-01-01 
03:07:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:54.706674 | orchestrator | 2026-01-01 03:07:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:54.706767 | orchestrator | 2026-01-01 03:07:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:07:57.755655 | orchestrator | 2026-01-01 03:07:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:07:57.758655 | orchestrator | 2026-01-01 03:07:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:07:57.758716 | orchestrator | 2026-01-01 03:07:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:00.809791 | orchestrator | 2026-01-01 03:08:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:00.810787 | orchestrator | 2026-01-01 03:08:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:00.810825 | orchestrator | 2026-01-01 03:08:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:03.865672 | orchestrator | 2026-01-01 03:08:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:03.866453 | orchestrator | 2026-01-01 03:08:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:03.866489 | orchestrator | 2026-01-01 03:08:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:06.919411 | orchestrator | 2026-01-01 03:08:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:06.922119 | orchestrator | 2026-01-01 03:08:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:06.922413 | orchestrator | 2026-01-01 03:08:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:09.983646 | orchestrator | 2026-01-01 03:08:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:08:09.986273 | orchestrator | 2026-01-01 03:08:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:09.986324 | orchestrator | 2026-01-01 03:08:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:13.034492 | orchestrator | 2026-01-01 03:08:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:13.036764 | orchestrator | 2026-01-01 03:08:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:13.036819 | orchestrator | 2026-01-01 03:08:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:16.086479 | orchestrator | 2026-01-01 03:08:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:16.087978 | orchestrator | 2026-01-01 03:08:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:16.088216 | orchestrator | 2026-01-01 03:08:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:19.141836 | orchestrator | 2026-01-01 03:08:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:19.144573 | orchestrator | 2026-01-01 03:08:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:19.144705 | orchestrator | 2026-01-01 03:08:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:22.191226 | orchestrator | 2026-01-01 03:08:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:22.192690 | orchestrator | 2026-01-01 03:08:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:22.192764 | orchestrator | 2026-01-01 03:08:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:25.243492 | orchestrator | 2026-01-01 03:08:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:25.246147 | orchestrator | 2026-01-01 03:08:25 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:25.246207 | orchestrator | 2026-01-01 03:08:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:28.309708 | orchestrator | 2026-01-01 03:08:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:28.311949 | orchestrator | 2026-01-01 03:08:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:28.312016 | orchestrator | 2026-01-01 03:08:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:31.365694 | orchestrator | 2026-01-01 03:08:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:31.367789 | orchestrator | 2026-01-01 03:08:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:31.367826 | orchestrator | 2026-01-01 03:08:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:34.418439 | orchestrator | 2026-01-01 03:08:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:34.421410 | orchestrator | 2026-01-01 03:08:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:34.421784 | orchestrator | 2026-01-01 03:08:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:37.478630 | orchestrator | 2026-01-01 03:08:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:37.480107 | orchestrator | 2026-01-01 03:08:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:37.480143 | orchestrator | 2026-01-01 03:08:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:40.530947 | orchestrator | 2026-01-01 03:08:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:40.532531 | orchestrator | 2026-01-01 03:08:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:08:40.532561 | orchestrator | 2026-01-01 03:08:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:43.578956 | orchestrator | 2026-01-01 03:08:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:43.581456 | orchestrator | 2026-01-01 03:08:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:43.581510 | orchestrator | 2026-01-01 03:08:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:46.629140 | orchestrator | 2026-01-01 03:08:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:46.631292 | orchestrator | 2026-01-01 03:08:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:46.631337 | orchestrator | 2026-01-01 03:08:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:49.688991 | orchestrator | 2026-01-01 03:08:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:49.691329 | orchestrator | 2026-01-01 03:08:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:49.691415 | orchestrator | 2026-01-01 03:08:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:52.750066 | orchestrator | 2026-01-01 03:08:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:52.751237 | orchestrator | 2026-01-01 03:08:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:52.751322 | orchestrator | 2026-01-01 03:08:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:08:55.799693 | orchestrator | 2026-01-01 03:08:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:55.800896 | orchestrator | 2026-01-01 03:08:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:55.800947 | orchestrator | 2026-01-01 03:08:55 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:08:58.853830 | orchestrator | 2026-01-01 03:08:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:08:58.855813 | orchestrator | 2026-01-01 03:08:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:08:58.855853 | orchestrator | 2026-01-01 03:08:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:01.896094 | orchestrator | 2026-01-01 03:09:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:01.897701 | orchestrator | 2026-01-01 03:09:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:01.897760 | orchestrator | 2026-01-01 03:09:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:04.937180 | orchestrator | 2026-01-01 03:09:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:04.939455 | orchestrator | 2026-01-01 03:09:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:04.939631 | orchestrator | 2026-01-01 03:09:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:07.992224 | orchestrator | 2026-01-01 03:09:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:07.995472 | orchestrator | 2026-01-01 03:09:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:07.995784 | orchestrator | 2026-01-01 03:09:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:11.056850 | orchestrator | 2026-01-01 03:09:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:11.059743 | orchestrator | 2026-01-01 03:09:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:11.059808 | orchestrator | 2026-01-01 03:09:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:14.107564 | orchestrator | 2026-01-01 
03:09:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:14.108747 | orchestrator | 2026-01-01 03:09:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:14.108833 | orchestrator | 2026-01-01 03:09:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:17.155129 | orchestrator | 2026-01-01 03:09:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:17.156002 | orchestrator | 2026-01-01 03:09:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:17.156064 | orchestrator | 2026-01-01 03:09:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:20.211701 | orchestrator | 2026-01-01 03:09:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:20.212622 | orchestrator | 2026-01-01 03:09:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:20.212759 | orchestrator | 2026-01-01 03:09:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:23.265906 | orchestrator | 2026-01-01 03:09:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:23.267285 | orchestrator | 2026-01-01 03:09:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:23.267344 | orchestrator | 2026-01-01 03:09:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:26.320816 | orchestrator | 2026-01-01 03:09:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:26.325256 | orchestrator | 2026-01-01 03:09:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:26.325333 | orchestrator | 2026-01-01 03:09:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:29.377702 | orchestrator | 2026-01-01 03:09:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:09:29.379368 | orchestrator | 2026-01-01 03:09:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:29.379404 | orchestrator | 2026-01-01 03:09:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:32.443077 | orchestrator | 2026-01-01 03:09:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:32.445980 | orchestrator | 2026-01-01 03:09:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:32.446092 | orchestrator | 2026-01-01 03:09:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:35.496117 | orchestrator | 2026-01-01 03:09:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:35.497550 | orchestrator | 2026-01-01 03:09:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:35.497637 | orchestrator | 2026-01-01 03:09:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:38.548772 | orchestrator | 2026-01-01 03:09:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:38.550476 | orchestrator | 2026-01-01 03:09:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:38.550651 | orchestrator | 2026-01-01 03:09:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:41.601969 | orchestrator | 2026-01-01 03:09:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:41.603116 | orchestrator | 2026-01-01 03:09:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:41.603169 | orchestrator | 2026-01-01 03:09:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:44.654357 | orchestrator | 2026-01-01 03:09:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:44.656184 | orchestrator | 2026-01-01 03:09:44 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:44.656262 | orchestrator | 2026-01-01 03:09:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:47.707649 | orchestrator | 2026-01-01 03:09:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:47.709403 | orchestrator | 2026-01-01 03:09:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:47.709489 | orchestrator | 2026-01-01 03:09:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:50.768369 | orchestrator | 2026-01-01 03:09:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:50.769091 | orchestrator | 2026-01-01 03:09:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:50.769206 | orchestrator | 2026-01-01 03:09:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:53.830919 | orchestrator | 2026-01-01 03:09:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:53.831552 | orchestrator | 2026-01-01 03:09:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:53.831945 | orchestrator | 2026-01-01 03:09:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:56.875533 | orchestrator | 2026-01-01 03:09:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:56.876118 | orchestrator | 2026-01-01 03:09:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:09:56.876171 | orchestrator | 2026-01-01 03:09:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:09:59.930228 | orchestrator | 2026-01-01 03:09:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:09:59.930980 | orchestrator | 2026-01-01 03:09:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:09:59.931120 | orchestrator | 2026-01-01 03:09:59 | INFO  | Wait 1 second(s) until the next check
2026-01-01 03:10:02.977822 | orchestrator | 2026-01-01 03:10:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 03:10:02.981221 | orchestrator | 2026-01-01 03:10:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 03:10:02.981306 | orchestrator | 2026-01-01 03:10:02 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds; both tasks remained in state STARTED from 03:10:02 through 03:15:32 ...]
2026-01-01 03:15:32.480593 | orchestrator | 2026-01-01 03:15:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 03:15:32.481853 | orchestrator | 2026-01-01 03:15:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 03:15:32.481883 | orchestrator | 2026-01-01 03:15:32 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:15:35.533025 | orchestrator | 2026-01-01 03:15:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:15:35.536039 | orchestrator | 2026-01-01 03:15:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:15:35.536076 | orchestrator | 2026-01-01 03:15:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:15:38.589420 | orchestrator | 2026-01-01 03:15:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:15:38.590704 | orchestrator | 2026-01-01 03:15:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:15:38.590757 | orchestrator | 2026-01-01 03:15:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:15:41.641483 | orchestrator | 2026-01-01 03:15:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:15:41.643860 | orchestrator | 2026-01-01 03:15:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:15:41.643902 | orchestrator | 2026-01-01 03:15:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:15:44.697111 | orchestrator | 2026-01-01 03:15:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:15:44.698219 | orchestrator | 2026-01-01 03:15:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:15:44.698256 | orchestrator | 2026-01-01 03:15:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:15:47.741574 | orchestrator | 2026-01-01 03:15:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:15:47.742596 | orchestrator | 2026-01-01 03:15:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:15:47.742744 | orchestrator | 2026-01-01 03:15:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:15:50.793895 | orchestrator | 2026-01-01 
03:15:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:15:50.795443 | orchestrator | 2026-01-01 03:15:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:15:50.795513 | orchestrator | 2026-01-01 03:15:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:15:53.844674 | orchestrator | 2026-01-01 03:15:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:15:53.845254 | orchestrator | 2026-01-01 03:15:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:15:53.845289 | orchestrator | 2026-01-01 03:15:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:15:56.895837 | orchestrator | 2026-01-01 03:15:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:15:56.898008 | orchestrator | 2026-01-01 03:15:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:15:56.898208 | orchestrator | 2026-01-01 03:15:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:15:59.949982 | orchestrator | 2026-01-01 03:15:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:15:59.952147 | orchestrator | 2026-01-01 03:15:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:15:59.952222 | orchestrator | 2026-01-01 03:15:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:03.007233 | orchestrator | 2026-01-01 03:16:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:03.009898 | orchestrator | 2026-01-01 03:16:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:03.010178 | orchestrator | 2026-01-01 03:16:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:06.056926 | orchestrator | 2026-01-01 03:16:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:16:06.057429 | orchestrator | 2026-01-01 03:16:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:06.057651 | orchestrator | 2026-01-01 03:16:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:09.103178 | orchestrator | 2026-01-01 03:16:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:09.104900 | orchestrator | 2026-01-01 03:16:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:09.105055 | orchestrator | 2026-01-01 03:16:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:12.157152 | orchestrator | 2026-01-01 03:16:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:12.160015 | orchestrator | 2026-01-01 03:16:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:12.160099 | orchestrator | 2026-01-01 03:16:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:15.210279 | orchestrator | 2026-01-01 03:16:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:15.211666 | orchestrator | 2026-01-01 03:16:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:15.211704 | orchestrator | 2026-01-01 03:16:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:18.255872 | orchestrator | 2026-01-01 03:16:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:18.257100 | orchestrator | 2026-01-01 03:16:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:18.257144 | orchestrator | 2026-01-01 03:16:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:21.299672 | orchestrator | 2026-01-01 03:16:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:21.300190 | orchestrator | 2026-01-01 03:16:21 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:21.300224 | orchestrator | 2026-01-01 03:16:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:24.351452 | orchestrator | 2026-01-01 03:16:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:24.352284 | orchestrator | 2026-01-01 03:16:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:24.352324 | orchestrator | 2026-01-01 03:16:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:27.402695 | orchestrator | 2026-01-01 03:16:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:27.404074 | orchestrator | 2026-01-01 03:16:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:27.404115 | orchestrator | 2026-01-01 03:16:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:30.446659 | orchestrator | 2026-01-01 03:16:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:30.449464 | orchestrator | 2026-01-01 03:16:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:30.449546 | orchestrator | 2026-01-01 03:16:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:33.495230 | orchestrator | 2026-01-01 03:16:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:33.496776 | orchestrator | 2026-01-01 03:16:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:33.496872 | orchestrator | 2026-01-01 03:16:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:36.546488 | orchestrator | 2026-01-01 03:16:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:36.548715 | orchestrator | 2026-01-01 03:16:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:16:36.548751 | orchestrator | 2026-01-01 03:16:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:39.595077 | orchestrator | 2026-01-01 03:16:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:39.597175 | orchestrator | 2026-01-01 03:16:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:39.597230 | orchestrator | 2026-01-01 03:16:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:42.648929 | orchestrator | 2026-01-01 03:16:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:42.651135 | orchestrator | 2026-01-01 03:16:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:42.651463 | orchestrator | 2026-01-01 03:16:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:45.699308 | orchestrator | 2026-01-01 03:16:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:45.701551 | orchestrator | 2026-01-01 03:16:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:45.701690 | orchestrator | 2026-01-01 03:16:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:48.759148 | orchestrator | 2026-01-01 03:16:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:48.762173 | orchestrator | 2026-01-01 03:16:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:48.762299 | orchestrator | 2026-01-01 03:16:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:51.809096 | orchestrator | 2026-01-01 03:16:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:51.811077 | orchestrator | 2026-01-01 03:16:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:51.811155 | orchestrator | 2026-01-01 03:16:51 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:16:54.851162 | orchestrator | 2026-01-01 03:16:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:54.852029 | orchestrator | 2026-01-01 03:16:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:54.852049 | orchestrator | 2026-01-01 03:16:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:16:57.905099 | orchestrator | 2026-01-01 03:16:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:16:57.906440 | orchestrator | 2026-01-01 03:16:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:16:57.906498 | orchestrator | 2026-01-01 03:16:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:00.953116 | orchestrator | 2026-01-01 03:17:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:00.954334 | orchestrator | 2026-01-01 03:17:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:00.954384 | orchestrator | 2026-01-01 03:17:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:04.002680 | orchestrator | 2026-01-01 03:17:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:04.004732 | orchestrator | 2026-01-01 03:17:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:04.004818 | orchestrator | 2026-01-01 03:17:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:07.059520 | orchestrator | 2026-01-01 03:17:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:07.060934 | orchestrator | 2026-01-01 03:17:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:07.061281 | orchestrator | 2026-01-01 03:17:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:10.107965 | orchestrator | 2026-01-01 
03:17:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:10.110294 | orchestrator | 2026-01-01 03:17:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:10.110363 | orchestrator | 2026-01-01 03:17:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:13.162219 | orchestrator | 2026-01-01 03:17:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:13.163414 | orchestrator | 2026-01-01 03:17:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:13.163442 | orchestrator | 2026-01-01 03:17:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:16.205730 | orchestrator | 2026-01-01 03:17:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:16.206957 | orchestrator | 2026-01-01 03:17:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:16.207050 | orchestrator | 2026-01-01 03:17:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:19.256841 | orchestrator | 2026-01-01 03:17:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:19.257850 | orchestrator | 2026-01-01 03:17:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:19.257907 | orchestrator | 2026-01-01 03:17:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:22.308214 | orchestrator | 2026-01-01 03:17:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:22.309267 | orchestrator | 2026-01-01 03:17:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:22.309297 | orchestrator | 2026-01-01 03:17:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:25.353404 | orchestrator | 2026-01-01 03:17:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:17:25.354306 | orchestrator | 2026-01-01 03:17:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:25.354762 | orchestrator | 2026-01-01 03:17:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:28.399390 | orchestrator | 2026-01-01 03:17:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:28.401541 | orchestrator | 2026-01-01 03:17:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:28.401575 | orchestrator | 2026-01-01 03:17:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:31.452182 | orchestrator | 2026-01-01 03:17:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:31.454631 | orchestrator | 2026-01-01 03:17:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:31.454692 | orchestrator | 2026-01-01 03:17:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:34.504943 | orchestrator | 2026-01-01 03:17:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:34.506770 | orchestrator | 2026-01-01 03:17:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:34.506807 | orchestrator | 2026-01-01 03:17:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:37.553232 | orchestrator | 2026-01-01 03:17:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:37.553320 | orchestrator | 2026-01-01 03:17:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:37.553360 | orchestrator | 2026-01-01 03:17:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:40.594527 | orchestrator | 2026-01-01 03:17:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:40.595170 | orchestrator | 2026-01-01 03:17:40 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:40.595239 | orchestrator | 2026-01-01 03:17:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:43.649591 | orchestrator | 2026-01-01 03:17:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:43.652316 | orchestrator | 2026-01-01 03:17:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:43.652449 | orchestrator | 2026-01-01 03:17:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:46.690527 | orchestrator | 2026-01-01 03:17:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:46.692708 | orchestrator | 2026-01-01 03:17:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:46.692785 | orchestrator | 2026-01-01 03:17:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:49.745016 | orchestrator | 2026-01-01 03:17:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:49.747212 | orchestrator | 2026-01-01 03:17:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:49.747244 | orchestrator | 2026-01-01 03:17:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:52.800388 | orchestrator | 2026-01-01 03:17:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:52.803586 | orchestrator | 2026-01-01 03:17:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:52.803640 | orchestrator | 2026-01-01 03:17:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:55.854456 | orchestrator | 2026-01-01 03:17:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:55.856679 | orchestrator | 2026-01-01 03:17:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:17:55.856712 | orchestrator | 2026-01-01 03:17:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:17:58.916227 | orchestrator | 2026-01-01 03:17:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:17:58.918340 | orchestrator | 2026-01-01 03:17:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:17:58.918370 | orchestrator | 2026-01-01 03:17:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:01.967648 | orchestrator | 2026-01-01 03:18:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:01.968905 | orchestrator | 2026-01-01 03:18:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:01.968945 | orchestrator | 2026-01-01 03:18:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:05.016118 | orchestrator | 2026-01-01 03:18:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:05.019082 | orchestrator | 2026-01-01 03:18:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:05.019143 | orchestrator | 2026-01-01 03:18:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:08.063620 | orchestrator | 2026-01-01 03:18:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:08.064789 | orchestrator | 2026-01-01 03:18:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:08.064847 | orchestrator | 2026-01-01 03:18:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:11.113959 | orchestrator | 2026-01-01 03:18:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:11.115729 | orchestrator | 2026-01-01 03:18:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:11.115809 | orchestrator | 2026-01-01 03:18:11 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:18:14.162214 | orchestrator | 2026-01-01 03:18:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:14.164780 | orchestrator | 2026-01-01 03:18:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:14.164854 | orchestrator | 2026-01-01 03:18:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:17.213289 | orchestrator | 2026-01-01 03:18:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:17.214777 | orchestrator | 2026-01-01 03:18:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:17.214825 | orchestrator | 2026-01-01 03:18:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:20.255823 | orchestrator | 2026-01-01 03:18:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:20.258230 | orchestrator | 2026-01-01 03:18:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:20.258295 | orchestrator | 2026-01-01 03:18:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:23.305168 | orchestrator | 2026-01-01 03:18:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:23.306899 | orchestrator | 2026-01-01 03:18:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:23.306958 | orchestrator | 2026-01-01 03:18:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:26.354643 | orchestrator | 2026-01-01 03:18:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:26.355517 | orchestrator | 2026-01-01 03:18:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:26.355544 | orchestrator | 2026-01-01 03:18:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:29.411559 | orchestrator | 2026-01-01 
03:18:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:29.413256 | orchestrator | 2026-01-01 03:18:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:29.413299 | orchestrator | 2026-01-01 03:18:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:32.465074 | orchestrator | 2026-01-01 03:18:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:32.467120 | orchestrator | 2026-01-01 03:18:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:32.467162 | orchestrator | 2026-01-01 03:18:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:35.516364 | orchestrator | 2026-01-01 03:18:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:35.517724 | orchestrator | 2026-01-01 03:18:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:35.517762 | orchestrator | 2026-01-01 03:18:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:38.565328 | orchestrator | 2026-01-01 03:18:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:38.567585 | orchestrator | 2026-01-01 03:18:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:38.567622 | orchestrator | 2026-01-01 03:18:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:41.617242 | orchestrator | 2026-01-01 03:18:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:41.618897 | orchestrator | 2026-01-01 03:18:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:41.618962 | orchestrator | 2026-01-01 03:18:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:44.672286 | orchestrator | 2026-01-01 03:18:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:18:44.673531 | orchestrator | 2026-01-01 03:18:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:44.673596 | orchestrator | 2026-01-01 03:18:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:47.716007 | orchestrator | 2026-01-01 03:18:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:47.717246 | orchestrator | 2026-01-01 03:18:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:47.717330 | orchestrator | 2026-01-01 03:18:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:50.759474 | orchestrator | 2026-01-01 03:18:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:50.760056 | orchestrator | 2026-01-01 03:18:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:50.760266 | orchestrator | 2026-01-01 03:18:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:53.801087 | orchestrator | 2026-01-01 03:18:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:53.801273 | orchestrator | 2026-01-01 03:18:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:53.801293 | orchestrator | 2026-01-01 03:18:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:56.851652 | orchestrator | 2026-01-01 03:18:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:56.853651 | orchestrator | 2026-01-01 03:18:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:56.853724 | orchestrator | 2026-01-01 03:18:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:18:59.899658 | orchestrator | 2026-01-01 03:18:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:18:59.901531 | orchestrator | 2026-01-01 03:18:59 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:18:59.901581 | orchestrator | 2026-01-01 03:18:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:02.958200 | orchestrator | 2026-01-01 03:19:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:19:02.960337 | orchestrator | 2026-01-01 03:19:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:19:02.960471 | orchestrator | 2026-01-01 03:19:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:06.020224 | orchestrator | 2026-01-01 03:19:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:19:06.023362 | orchestrator | 2026-01-01 03:19:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:19:06.023447 | orchestrator | 2026-01-01 03:19:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:09.064945 | orchestrator | 2026-01-01 03:19:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:19:09.068551 | orchestrator | 2026-01-01 03:19:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:19:09.068648 | orchestrator | 2026-01-01 03:19:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:12.111034 | orchestrator | 2026-01-01 03:19:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:19:12.114174 | orchestrator | 2026-01-01 03:19:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:19:12.114243 | orchestrator | 2026-01-01 03:19:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:19:15.160379 | orchestrator | 2026-01-01 03:19:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:19:15.161682 | orchestrator | 2026-01-01 03:19:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:19:15.161920 | orchestrator | 2026-01-01 03:19:15 | INFO  | Wait 1 second(s) until the next check
2026-01-01 03:19:18.207265 | orchestrator | 2026-01-01 03:19:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 03:19:18.210421 | orchestrator | 2026-01-01 03:19:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 03:19:18.210509 | orchestrator | 2026-01-01 03:19:18 | INFO  | Wait 1 second(s) until the next check
[... the same two status checks repeated every ~3 seconds from 03:19:21 through 03:24:14; both tasks remained in state STARTED for the entire interval ...]
2026-01-01 03:24:14.167191 | orchestrator | 2026-01-01 03:24:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 03:24:14.167407 | orchestrator | 2026-01-01 03:24:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 03:24:14.167434 | orchestrator | 2026-01-01 03:24:14 | INFO  | Wait 1 second(s) until the next check
2026-01-01 03:24:17.222569 | orchestrator | 2026-01-01 03:24:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 03:24:17.225073 | orchestrator | 2026-01-01 03:24:17 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:17.225128 | orchestrator | 2026-01-01 03:24:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:20.268552 | orchestrator | 2026-01-01 03:24:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:20.271407 | orchestrator | 2026-01-01 03:24:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:20.271458 | orchestrator | 2026-01-01 03:24:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:23.322010 | orchestrator | 2026-01-01 03:24:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:23.325003 | orchestrator | 2026-01-01 03:24:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:23.325036 | orchestrator | 2026-01-01 03:24:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:26.379665 | orchestrator | 2026-01-01 03:24:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:26.381527 | orchestrator | 2026-01-01 03:24:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:26.381567 | orchestrator | 2026-01-01 03:24:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:29.426107 | orchestrator | 2026-01-01 03:24:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:29.426957 | orchestrator | 2026-01-01 03:24:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:29.427147 | orchestrator | 2026-01-01 03:24:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:32.467821 | orchestrator | 2026-01-01 03:24:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:32.470471 | orchestrator | 2026-01-01 03:24:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:24:32.470538 | orchestrator | 2026-01-01 03:24:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:35.519570 | orchestrator | 2026-01-01 03:24:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:35.520367 | orchestrator | 2026-01-01 03:24:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:35.520415 | orchestrator | 2026-01-01 03:24:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:38.565335 | orchestrator | 2026-01-01 03:24:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:38.566523 | orchestrator | 2026-01-01 03:24:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:38.566570 | orchestrator | 2026-01-01 03:24:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:41.607518 | orchestrator | 2026-01-01 03:24:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:41.609181 | orchestrator | 2026-01-01 03:24:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:41.609235 | orchestrator | 2026-01-01 03:24:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:44.658537 | orchestrator | 2026-01-01 03:24:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:44.659405 | orchestrator | 2026-01-01 03:24:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:44.659435 | orchestrator | 2026-01-01 03:24:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:47.702513 | orchestrator | 2026-01-01 03:24:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:47.704380 | orchestrator | 2026-01-01 03:24:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:47.704435 | orchestrator | 2026-01-01 03:24:47 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:24:50.759025 | orchestrator | 2026-01-01 03:24:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:50.762192 | orchestrator | 2026-01-01 03:24:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:50.762248 | orchestrator | 2026-01-01 03:24:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:53.808838 | orchestrator | 2026-01-01 03:24:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:53.812347 | orchestrator | 2026-01-01 03:24:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:53.812392 | orchestrator | 2026-01-01 03:24:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:56.863731 | orchestrator | 2026-01-01 03:24:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:56.865868 | orchestrator | 2026-01-01 03:24:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:56.866084 | orchestrator | 2026-01-01 03:24:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:24:59.924067 | orchestrator | 2026-01-01 03:24:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:24:59.926284 | orchestrator | 2026-01-01 03:24:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:24:59.926341 | orchestrator | 2026-01-01 03:24:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:02.980838 | orchestrator | 2026-01-01 03:25:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:02.982271 | orchestrator | 2026-01-01 03:25:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:02.982319 | orchestrator | 2026-01-01 03:25:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:06.046218 | orchestrator | 2026-01-01 
03:25:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:06.046326 | orchestrator | 2026-01-01 03:25:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:06.046342 | orchestrator | 2026-01-01 03:25:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:09.094186 | orchestrator | 2026-01-01 03:25:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:09.096769 | orchestrator | 2026-01-01 03:25:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:09.096818 | orchestrator | 2026-01-01 03:25:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:12.143491 | orchestrator | 2026-01-01 03:25:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:12.145640 | orchestrator | 2026-01-01 03:25:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:12.146323 | orchestrator | 2026-01-01 03:25:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:15.193263 | orchestrator | 2026-01-01 03:25:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:15.195238 | orchestrator | 2026-01-01 03:25:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:15.195281 | orchestrator | 2026-01-01 03:25:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:18.247709 | orchestrator | 2026-01-01 03:25:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:18.249654 | orchestrator | 2026-01-01 03:25:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:18.249702 | orchestrator | 2026-01-01 03:25:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:21.304354 | orchestrator | 2026-01-01 03:25:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:25:21.306345 | orchestrator | 2026-01-01 03:25:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:21.306396 | orchestrator | 2026-01-01 03:25:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:24.352805 | orchestrator | 2026-01-01 03:25:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:24.354502 | orchestrator | 2026-01-01 03:25:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:24.354534 | orchestrator | 2026-01-01 03:25:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:27.401152 | orchestrator | 2026-01-01 03:25:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:27.404316 | orchestrator | 2026-01-01 03:25:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:27.404367 | orchestrator | 2026-01-01 03:25:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:30.467903 | orchestrator | 2026-01-01 03:25:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:30.471033 | orchestrator | 2026-01-01 03:25:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:30.471127 | orchestrator | 2026-01-01 03:25:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:33.527242 | orchestrator | 2026-01-01 03:25:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:33.529667 | orchestrator | 2026-01-01 03:25:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:33.529736 | orchestrator | 2026-01-01 03:25:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:36.573109 | orchestrator | 2026-01-01 03:25:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:36.575154 | orchestrator | 2026-01-01 03:25:36 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:36.575255 | orchestrator | 2026-01-01 03:25:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:39.625640 | orchestrator | 2026-01-01 03:25:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:39.626211 | orchestrator | 2026-01-01 03:25:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:39.626262 | orchestrator | 2026-01-01 03:25:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:42.676562 | orchestrator | 2026-01-01 03:25:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:42.678323 | orchestrator | 2026-01-01 03:25:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:42.678371 | orchestrator | 2026-01-01 03:25:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:45.731524 | orchestrator | 2026-01-01 03:25:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:45.732946 | orchestrator | 2026-01-01 03:25:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:45.732961 | orchestrator | 2026-01-01 03:25:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:48.788919 | orchestrator | 2026-01-01 03:25:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:48.789436 | orchestrator | 2026-01-01 03:25:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:48.789898 | orchestrator | 2026-01-01 03:25:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:51.838112 | orchestrator | 2026-01-01 03:25:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:51.839157 | orchestrator | 2026-01-01 03:25:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:25:51.839241 | orchestrator | 2026-01-01 03:25:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:54.883315 | orchestrator | 2026-01-01 03:25:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:54.885246 | orchestrator | 2026-01-01 03:25:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:54.885280 | orchestrator | 2026-01-01 03:25:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:25:57.935953 | orchestrator | 2026-01-01 03:25:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:25:57.938844 | orchestrator | 2026-01-01 03:25:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:25:57.938896 | orchestrator | 2026-01-01 03:25:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:00.993328 | orchestrator | 2026-01-01 03:26:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:00.995270 | orchestrator | 2026-01-01 03:26:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:00.995359 | orchestrator | 2026-01-01 03:26:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:04.043204 | orchestrator | 2026-01-01 03:26:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:04.045095 | orchestrator | 2026-01-01 03:26:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:04.045135 | orchestrator | 2026-01-01 03:26:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:07.087738 | orchestrator | 2026-01-01 03:26:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:07.089104 | orchestrator | 2026-01-01 03:26:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:07.089285 | orchestrator | 2026-01-01 03:26:07 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:26:10.141113 | orchestrator | 2026-01-01 03:26:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:10.144613 | orchestrator | 2026-01-01 03:26:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:10.144695 | orchestrator | 2026-01-01 03:26:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:13.186398 | orchestrator | 2026-01-01 03:26:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:13.187663 | orchestrator | 2026-01-01 03:26:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:13.187696 | orchestrator | 2026-01-01 03:26:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:16.228269 | orchestrator | 2026-01-01 03:26:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:16.229723 | orchestrator | 2026-01-01 03:26:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:16.229810 | orchestrator | 2026-01-01 03:26:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:19.289609 | orchestrator | 2026-01-01 03:26:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:19.291263 | orchestrator | 2026-01-01 03:26:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:19.291297 | orchestrator | 2026-01-01 03:26:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:22.342334 | orchestrator | 2026-01-01 03:26:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:22.343288 | orchestrator | 2026-01-01 03:26:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:22.343319 | orchestrator | 2026-01-01 03:26:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:25.401484 | orchestrator | 2026-01-01 
03:26:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:25.403169 | orchestrator | 2026-01-01 03:26:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:25.403242 | orchestrator | 2026-01-01 03:26:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:28.452927 | orchestrator | 2026-01-01 03:26:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:28.454795 | orchestrator | 2026-01-01 03:26:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:28.454868 | orchestrator | 2026-01-01 03:26:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:31.501351 | orchestrator | 2026-01-01 03:26:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:31.503939 | orchestrator | 2026-01-01 03:26:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:31.504009 | orchestrator | 2026-01-01 03:26:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:34.550629 | orchestrator | 2026-01-01 03:26:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:34.554942 | orchestrator | 2026-01-01 03:26:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:34.554998 | orchestrator | 2026-01-01 03:26:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:37.594652 | orchestrator | 2026-01-01 03:26:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:37.596217 | orchestrator | 2026-01-01 03:26:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:37.596238 | orchestrator | 2026-01-01 03:26:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:40.637170 | orchestrator | 2026-01-01 03:26:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:26:40.638129 | orchestrator | 2026-01-01 03:26:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:40.638169 | orchestrator | 2026-01-01 03:26:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:43.679439 | orchestrator | 2026-01-01 03:26:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:43.681590 | orchestrator | 2026-01-01 03:26:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:43.681649 | orchestrator | 2026-01-01 03:26:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:46.734395 | orchestrator | 2026-01-01 03:26:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:46.735331 | orchestrator | 2026-01-01 03:26:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:46.735368 | orchestrator | 2026-01-01 03:26:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:49.776394 | orchestrator | 2026-01-01 03:26:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:49.777312 | orchestrator | 2026-01-01 03:26:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:49.777359 | orchestrator | 2026-01-01 03:26:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:52.837191 | orchestrator | 2026-01-01 03:26:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:52.841609 | orchestrator | 2026-01-01 03:26:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:52.841679 | orchestrator | 2026-01-01 03:26:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:55.898574 | orchestrator | 2026-01-01 03:26:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:55.899818 | orchestrator | 2026-01-01 03:26:55 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:55.899862 | orchestrator | 2026-01-01 03:26:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:26:58.947783 | orchestrator | 2026-01-01 03:26:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:26:58.950155 | orchestrator | 2026-01-01 03:26:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:26:58.950232 | orchestrator | 2026-01-01 03:26:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:02.007619 | orchestrator | 2026-01-01 03:27:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:02.009228 | orchestrator | 2026-01-01 03:27:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:02.009271 | orchestrator | 2026-01-01 03:27:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:05.057563 | orchestrator | 2026-01-01 03:27:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:05.061817 | orchestrator | 2026-01-01 03:27:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:05.061847 | orchestrator | 2026-01-01 03:27:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:08.104121 | orchestrator | 2026-01-01 03:27:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:08.105717 | orchestrator | 2026-01-01 03:27:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:08.105770 | orchestrator | 2026-01-01 03:27:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:11.151697 | orchestrator | 2026-01-01 03:27:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:11.153146 | orchestrator | 2026-01-01 03:27:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:27:11.153229 | orchestrator | 2026-01-01 03:27:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:14.191184 | orchestrator | 2026-01-01 03:27:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:14.191973 | orchestrator | 2026-01-01 03:27:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:14.192013 | orchestrator | 2026-01-01 03:27:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:17.231911 | orchestrator | 2026-01-01 03:27:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:17.233926 | orchestrator | 2026-01-01 03:27:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:17.233967 | orchestrator | 2026-01-01 03:27:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:20.280421 | orchestrator | 2026-01-01 03:27:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:20.281885 | orchestrator | 2026-01-01 03:27:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:20.281926 | orchestrator | 2026-01-01 03:27:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:23.331998 | orchestrator | 2026-01-01 03:27:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:23.332994 | orchestrator | 2026-01-01 03:27:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:23.333018 | orchestrator | 2026-01-01 03:27:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:26.382998 | orchestrator | 2026-01-01 03:27:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:26.384469 | orchestrator | 2026-01-01 03:27:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:26.384513 | orchestrator | 2026-01-01 03:27:26 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:27:29.436951 | orchestrator | 2026-01-01 03:27:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:29.438422 | orchestrator | 2026-01-01 03:27:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:29.438467 | orchestrator | 2026-01-01 03:27:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:32.496230 | orchestrator | 2026-01-01 03:27:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:32.500447 | orchestrator | 2026-01-01 03:27:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:32.500704 | orchestrator | 2026-01-01 03:27:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:35.551021 | orchestrator | 2026-01-01 03:27:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:35.552926 | orchestrator | 2026-01-01 03:27:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:35.552983 | orchestrator | 2026-01-01 03:27:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:38.598991 | orchestrator | 2026-01-01 03:27:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:38.600034 | orchestrator | 2026-01-01 03:27:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:38.600111 | orchestrator | 2026-01-01 03:27:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:41.648068 | orchestrator | 2026-01-01 03:27:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:41.649333 | orchestrator | 2026-01-01 03:27:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:41.649395 | orchestrator | 2026-01-01 03:27:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:44.694304 | orchestrator | 2026-01-01 
03:27:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:44.695899 | orchestrator | 2026-01-01 03:27:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:44.695941 | orchestrator | 2026-01-01 03:27:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:47.753302 | orchestrator | 2026-01-01 03:27:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:47.754751 | orchestrator | 2026-01-01 03:27:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:47.754780 | orchestrator | 2026-01-01 03:27:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:50.800878 | orchestrator | 2026-01-01 03:27:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:50.803095 | orchestrator | 2026-01-01 03:27:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:50.803167 | orchestrator | 2026-01-01 03:27:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:53.866324 | orchestrator | 2026-01-01 03:27:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:53.868315 | orchestrator | 2026-01-01 03:27:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:53.868435 | orchestrator | 2026-01-01 03:27:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:56.911413 | orchestrator | 2026-01-01 03:27:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:27:56.912702 | orchestrator | 2026-01-01 03:27:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:27:56.912776 | orchestrator | 2026-01-01 03:27:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:27:59.963836 | orchestrator | 2026-01-01 03:27:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:27:59.966299 | orchestrator | 2026-01-01 03:27:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 03:27:59.966452 | orchestrator | 2026-01-01 03:27:59 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:28:03 through 03:33:29: tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 remain in state STARTED ...]
2026-01-01 03:33:32.591843 | orchestrator | 2026-01-01 03:33:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 03:33:32.594480 | orchestrator | 2026-01-01 03:33:32 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:33:32.594530 | orchestrator | 2026-01-01 03:33:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:33:35.639092 | orchestrator | 2026-01-01 03:33:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:33:35.641048 | orchestrator | 2026-01-01 03:33:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:33:35.641157 | orchestrator | 2026-01-01 03:33:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:33:38.692243 | orchestrator | 2026-01-01 03:33:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:33:38.693878 | orchestrator | 2026-01-01 03:33:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:33:38.693919 | orchestrator | 2026-01-01 03:33:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:33:41.740242 | orchestrator | 2026-01-01 03:33:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:33:41.741169 | orchestrator | 2026-01-01 03:33:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:33:41.741201 | orchestrator | 2026-01-01 03:33:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:33:44.791954 | orchestrator | 2026-01-01 03:33:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:33:44.794221 | orchestrator | 2026-01-01 03:33:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:33:44.794386 | orchestrator | 2026-01-01 03:33:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:33:47.848966 | orchestrator | 2026-01-01 03:33:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:33:47.850515 | orchestrator | 2026-01-01 03:33:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:33:47.850570 | orchestrator | 2026-01-01 03:33:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:33:50.900924 | orchestrator | 2026-01-01 03:33:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:33:50.902854 | orchestrator | 2026-01-01 03:33:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:33:50.902891 | orchestrator | 2026-01-01 03:33:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:33:53.945876 | orchestrator | 2026-01-01 03:33:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:33:53.947653 | orchestrator | 2026-01-01 03:33:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:33:53.947710 | orchestrator | 2026-01-01 03:33:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:33:56.991196 | orchestrator | 2026-01-01 03:33:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:33:56.992551 | orchestrator | 2026-01-01 03:33:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:33:56.992600 | orchestrator | 2026-01-01 03:33:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:00.038420 | orchestrator | 2026-01-01 03:34:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:00.039776 | orchestrator | 2026-01-01 03:34:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:00.039809 | orchestrator | 2026-01-01 03:34:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:03.095011 | orchestrator | 2026-01-01 03:34:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:03.097685 | orchestrator | 2026-01-01 03:34:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:03.097738 | orchestrator | 2026-01-01 03:34:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:34:06.145906 | orchestrator | 2026-01-01 03:34:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:06.146977 | orchestrator | 2026-01-01 03:34:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:06.147015 | orchestrator | 2026-01-01 03:34:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:09.180232 | orchestrator | 2026-01-01 03:34:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:09.182204 | orchestrator | 2026-01-01 03:34:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:09.182240 | orchestrator | 2026-01-01 03:34:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:12.232297 | orchestrator | 2026-01-01 03:34:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:12.234744 | orchestrator | 2026-01-01 03:34:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:12.234795 | orchestrator | 2026-01-01 03:34:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:15.289001 | orchestrator | 2026-01-01 03:34:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:15.290372 | orchestrator | 2026-01-01 03:34:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:15.290400 | orchestrator | 2026-01-01 03:34:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:18.338373 | orchestrator | 2026-01-01 03:34:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:18.340802 | orchestrator | 2026-01-01 03:34:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:18.340835 | orchestrator | 2026-01-01 03:34:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:21.387529 | orchestrator | 2026-01-01 
03:34:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:21.390232 | orchestrator | 2026-01-01 03:34:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:21.390304 | orchestrator | 2026-01-01 03:34:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:24.433907 | orchestrator | 2026-01-01 03:34:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:24.437287 | orchestrator | 2026-01-01 03:34:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:24.437328 | orchestrator | 2026-01-01 03:34:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:27.488886 | orchestrator | 2026-01-01 03:34:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:27.492488 | orchestrator | 2026-01-01 03:34:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:27.492530 | orchestrator | 2026-01-01 03:34:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:30.544327 | orchestrator | 2026-01-01 03:34:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:30.546554 | orchestrator | 2026-01-01 03:34:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:30.546588 | orchestrator | 2026-01-01 03:34:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:33.591071 | orchestrator | 2026-01-01 03:34:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:33.592675 | orchestrator | 2026-01-01 03:34:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:33.592732 | orchestrator | 2026-01-01 03:34:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:36.634358 | orchestrator | 2026-01-01 03:34:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:34:36.636762 | orchestrator | 2026-01-01 03:34:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:36.636813 | orchestrator | 2026-01-01 03:34:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:39.672562 | orchestrator | 2026-01-01 03:34:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:39.675506 | orchestrator | 2026-01-01 03:34:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:39.675580 | orchestrator | 2026-01-01 03:34:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:42.713860 | orchestrator | 2026-01-01 03:34:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:42.714183 | orchestrator | 2026-01-01 03:34:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:42.714377 | orchestrator | 2026-01-01 03:34:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:45.763307 | orchestrator | 2026-01-01 03:34:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:45.767306 | orchestrator | 2026-01-01 03:34:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:45.767363 | orchestrator | 2026-01-01 03:34:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:48.818773 | orchestrator | 2026-01-01 03:34:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:48.820874 | orchestrator | 2026-01-01 03:34:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:48.820910 | orchestrator | 2026-01-01 03:34:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:51.876131 | orchestrator | 2026-01-01 03:34:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:51.878843 | orchestrator | 2026-01-01 03:34:51 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:51.878956 | orchestrator | 2026-01-01 03:34:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:54.930720 | orchestrator | 2026-01-01 03:34:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:54.932840 | orchestrator | 2026-01-01 03:34:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:54.933164 | orchestrator | 2026-01-01 03:34:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:34:57.986158 | orchestrator | 2026-01-01 03:34:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:34:57.988601 | orchestrator | 2026-01-01 03:34:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:34:57.988657 | orchestrator | 2026-01-01 03:34:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:01.039305 | orchestrator | 2026-01-01 03:35:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:01.040512 | orchestrator | 2026-01-01 03:35:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:01.040626 | orchestrator | 2026-01-01 03:35:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:04.093365 | orchestrator | 2026-01-01 03:35:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:04.097056 | orchestrator | 2026-01-01 03:35:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:04.097107 | orchestrator | 2026-01-01 03:35:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:07.153564 | orchestrator | 2026-01-01 03:35:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:07.155573 | orchestrator | 2026-01-01 03:35:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:35:07.155621 | orchestrator | 2026-01-01 03:35:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:10.202511 | orchestrator | 2026-01-01 03:35:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:10.203765 | orchestrator | 2026-01-01 03:35:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:10.203857 | orchestrator | 2026-01-01 03:35:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:13.238707 | orchestrator | 2026-01-01 03:35:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:13.239705 | orchestrator | 2026-01-01 03:35:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:13.239748 | orchestrator | 2026-01-01 03:35:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:16.288273 | orchestrator | 2026-01-01 03:35:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:16.290687 | orchestrator | 2026-01-01 03:35:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:16.290776 | orchestrator | 2026-01-01 03:35:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:19.337399 | orchestrator | 2026-01-01 03:35:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:19.339765 | orchestrator | 2026-01-01 03:35:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:19.339803 | orchestrator | 2026-01-01 03:35:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:22.392937 | orchestrator | 2026-01-01 03:35:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:22.393375 | orchestrator | 2026-01-01 03:35:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:22.393411 | orchestrator | 2026-01-01 03:35:22 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:35:25.439587 | orchestrator | 2026-01-01 03:35:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:25.441629 | orchestrator | 2026-01-01 03:35:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:25.441679 | orchestrator | 2026-01-01 03:35:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:28.505629 | orchestrator | 2026-01-01 03:35:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:28.506885 | orchestrator | 2026-01-01 03:35:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:28.507045 | orchestrator | 2026-01-01 03:35:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:31.565133 | orchestrator | 2026-01-01 03:35:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:31.567476 | orchestrator | 2026-01-01 03:35:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:31.567956 | orchestrator | 2026-01-01 03:35:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:34.629881 | orchestrator | 2026-01-01 03:35:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:34.631537 | orchestrator | 2026-01-01 03:35:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:34.631575 | orchestrator | 2026-01-01 03:35:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:37.685915 | orchestrator | 2026-01-01 03:35:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:37.688786 | orchestrator | 2026-01-01 03:35:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:37.688822 | orchestrator | 2026-01-01 03:35:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:40.735554 | orchestrator | 2026-01-01 
03:35:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:40.736401 | orchestrator | 2026-01-01 03:35:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:40.736437 | orchestrator | 2026-01-01 03:35:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:43.786367 | orchestrator | 2026-01-01 03:35:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:43.787994 | orchestrator | 2026-01-01 03:35:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:43.788281 | orchestrator | 2026-01-01 03:35:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:46.826332 | orchestrator | 2026-01-01 03:35:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:46.827407 | orchestrator | 2026-01-01 03:35:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:46.827441 | orchestrator | 2026-01-01 03:35:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:49.879693 | orchestrator | 2026-01-01 03:35:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:49.880795 | orchestrator | 2026-01-01 03:35:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:49.880918 | orchestrator | 2026-01-01 03:35:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:52.931894 | orchestrator | 2026-01-01 03:35:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:52.933342 | orchestrator | 2026-01-01 03:35:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:52.933373 | orchestrator | 2026-01-01 03:35:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:55.979765 | orchestrator | 2026-01-01 03:35:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:35:55.982512 | orchestrator | 2026-01-01 03:35:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:55.983002 | orchestrator | 2026-01-01 03:35:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:35:59.032888 | orchestrator | 2026-01-01 03:35:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:35:59.034580 | orchestrator | 2026-01-01 03:35:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:35:59.034612 | orchestrator | 2026-01-01 03:35:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:02.080681 | orchestrator | 2026-01-01 03:36:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:02.082991 | orchestrator | 2026-01-01 03:36:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:02.083040 | orchestrator | 2026-01-01 03:36:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:05.133601 | orchestrator | 2026-01-01 03:36:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:05.135552 | orchestrator | 2026-01-01 03:36:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:05.135587 | orchestrator | 2026-01-01 03:36:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:08.184008 | orchestrator | 2026-01-01 03:36:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:08.187278 | orchestrator | 2026-01-01 03:36:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:08.187709 | orchestrator | 2026-01-01 03:36:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:11.244130 | orchestrator | 2026-01-01 03:36:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:11.247021 | orchestrator | 2026-01-01 03:36:11 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:11.247056 | orchestrator | 2026-01-01 03:36:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:14.301130 | orchestrator | 2026-01-01 03:36:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:14.301993 | orchestrator | 2026-01-01 03:36:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:14.302186 | orchestrator | 2026-01-01 03:36:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:17.351892 | orchestrator | 2026-01-01 03:36:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:17.353496 | orchestrator | 2026-01-01 03:36:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:17.353538 | orchestrator | 2026-01-01 03:36:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:20.403297 | orchestrator | 2026-01-01 03:36:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:20.404277 | orchestrator | 2026-01-01 03:36:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:20.404324 | orchestrator | 2026-01-01 03:36:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:23.450572 | orchestrator | 2026-01-01 03:36:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:23.453674 | orchestrator | 2026-01-01 03:36:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:23.453751 | orchestrator | 2026-01-01 03:36:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:26.497373 | orchestrator | 2026-01-01 03:36:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:26.500259 | orchestrator | 2026-01-01 03:36:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:36:26.500430 | orchestrator | 2026-01-01 03:36:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:29.551612 | orchestrator | 2026-01-01 03:36:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:29.553877 | orchestrator | 2026-01-01 03:36:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:29.553938 | orchestrator | 2026-01-01 03:36:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:32.597866 | orchestrator | 2026-01-01 03:36:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:32.599073 | orchestrator | 2026-01-01 03:36:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:32.599122 | orchestrator | 2026-01-01 03:36:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:35.653599 | orchestrator | 2026-01-01 03:36:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:35.656572 | orchestrator | 2026-01-01 03:36:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:35.656654 | orchestrator | 2026-01-01 03:36:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:38.710501 | orchestrator | 2026-01-01 03:36:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:38.711991 | orchestrator | 2026-01-01 03:36:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:38.712453 | orchestrator | 2026-01-01 03:36:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:41.772015 | orchestrator | 2026-01-01 03:36:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:41.773639 | orchestrator | 2026-01-01 03:36:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:41.773745 | orchestrator | 2026-01-01 03:36:41 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:36:44.820848 | orchestrator | 2026-01-01 03:36:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:44.822316 | orchestrator | 2026-01-01 03:36:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:44.822359 | orchestrator | 2026-01-01 03:36:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:47.866669 | orchestrator | 2026-01-01 03:36:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:47.868693 | orchestrator | 2026-01-01 03:36:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:47.868727 | orchestrator | 2026-01-01 03:36:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:50.917768 | orchestrator | 2026-01-01 03:36:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:50.919746 | orchestrator | 2026-01-01 03:36:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:50.919783 | orchestrator | 2026-01-01 03:36:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:53.969130 | orchestrator | 2026-01-01 03:36:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:53.970820 | orchestrator | 2026-01-01 03:36:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:53.970864 | orchestrator | 2026-01-01 03:36:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:36:57.040783 | orchestrator | 2026-01-01 03:36:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:36:57.042508 | orchestrator | 2026-01-01 03:36:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:36:57.042568 | orchestrator | 2026-01-01 03:36:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:00.089112 | orchestrator | 2026-01-01 
03:37:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:37:00.089420 | orchestrator | 2026-01-01 03:37:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:37:00.089492 | orchestrator | 2026-01-01 03:37:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:03.132282 | orchestrator | 2026-01-01 03:37:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:37:03.133936 | orchestrator | 2026-01-01 03:37:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:37:03.133964 | orchestrator | 2026-01-01 03:37:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:06.184663 | orchestrator | 2026-01-01 03:37:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:37:06.185809 | orchestrator | 2026-01-01 03:37:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:37:06.186051 | orchestrator | 2026-01-01 03:37:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:09.237536 | orchestrator | 2026-01-01 03:37:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:37:09.238817 | orchestrator | 2026-01-01 03:37:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:37:09.238852 | orchestrator | 2026-01-01 03:37:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:12.288778 | orchestrator | 2026-01-01 03:37:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:37:12.290931 | orchestrator | 2026-01-01 03:37:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:37:12.291010 | orchestrator | 2026-01-01 03:37:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:37:15.349041 | orchestrator | 2026-01-01 03:37:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED
2026-01-01 03:37:15.352970 | orchestrator | 2026-01-01 03:37:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 03:37:15.353009 | orchestrator | 2026-01-01 03:37:15 | INFO  | Wait 1 second(s) until the next check
2026-01-01 03:37:18.407672 | orchestrator | 2026-01-01 03:37:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 03:37:18.408944 | orchestrator | 2026-01-01 03:37:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 03:37:18.408977 | orchestrator | 2026-01-01 03:37:18 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds, both tasks remaining in state STARTED, through 03:42:29 ...]
2026-01-01 03:42:32.590321 | orchestrator | 2026-01-01 03:42:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state
STARTED 2026-01-01 03:42:32.591273 | orchestrator | 2026-01-01 03:42:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:42:32.591407 | orchestrator | 2026-01-01 03:42:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:42:35.636509 | orchestrator | 2026-01-01 03:42:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:42:35.637692 | orchestrator | 2026-01-01 03:42:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:42:35.637722 | orchestrator | 2026-01-01 03:42:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:42:38.680138 | orchestrator | 2026-01-01 03:42:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:42:38.681111 | orchestrator | 2026-01-01 03:42:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:42:38.681194 | orchestrator | 2026-01-01 03:42:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:42:41.726527 | orchestrator | 2026-01-01 03:42:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:42:41.727992 | orchestrator | 2026-01-01 03:42:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:42:41.728027 | orchestrator | 2026-01-01 03:42:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:42:44.771175 | orchestrator | 2026-01-01 03:42:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:42:44.773736 | orchestrator | 2026-01-01 03:42:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:42:44.773835 | orchestrator | 2026-01-01 03:42:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:42:47.820950 | orchestrator | 2026-01-01 03:42:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:42:47.821632 | orchestrator | 2026-01-01 03:42:47 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:42:47.821706 | orchestrator | 2026-01-01 03:42:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:42:50.873081 | orchestrator | 2026-01-01 03:42:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:42:50.873908 | orchestrator | 2026-01-01 03:42:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:42:50.874438 | orchestrator | 2026-01-01 03:42:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:42:53.923427 | orchestrator | 2026-01-01 03:42:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:42:53.923813 | orchestrator | 2026-01-01 03:42:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:42:53.923854 | orchestrator | 2026-01-01 03:42:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:42:56.975677 | orchestrator | 2026-01-01 03:42:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:42:56.977581 | orchestrator | 2026-01-01 03:42:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:42:56.977609 | orchestrator | 2026-01-01 03:42:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:00.036057 | orchestrator | 2026-01-01 03:43:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:00.039612 | orchestrator | 2026-01-01 03:43:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:00.039680 | orchestrator | 2026-01-01 03:43:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:03.095464 | orchestrator | 2026-01-01 03:43:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:03.096526 | orchestrator | 2026-01-01 03:43:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:43:03.096573 | orchestrator | 2026-01-01 03:43:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:06.138224 | orchestrator | 2026-01-01 03:43:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:06.140577 | orchestrator | 2026-01-01 03:43:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:06.140603 | orchestrator | 2026-01-01 03:43:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:09.184485 | orchestrator | 2026-01-01 03:43:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:09.185673 | orchestrator | 2026-01-01 03:43:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:09.185710 | orchestrator | 2026-01-01 03:43:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:12.232150 | orchestrator | 2026-01-01 03:43:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:12.233705 | orchestrator | 2026-01-01 03:43:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:12.233787 | orchestrator | 2026-01-01 03:43:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:15.284808 | orchestrator | 2026-01-01 03:43:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:15.284969 | orchestrator | 2026-01-01 03:43:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:15.284985 | orchestrator | 2026-01-01 03:43:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:18.335032 | orchestrator | 2026-01-01 03:43:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:18.335602 | orchestrator | 2026-01-01 03:43:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:18.335629 | orchestrator | 2026-01-01 03:43:18 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:43:21.372127 | orchestrator | 2026-01-01 03:43:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:21.374195 | orchestrator | 2026-01-01 03:43:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:21.374228 | orchestrator | 2026-01-01 03:43:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:24.419362 | orchestrator | 2026-01-01 03:43:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:24.421548 | orchestrator | 2026-01-01 03:43:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:24.421590 | orchestrator | 2026-01-01 03:43:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:27.472215 | orchestrator | 2026-01-01 03:43:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:27.476809 | orchestrator | 2026-01-01 03:43:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:27.476887 | orchestrator | 2026-01-01 03:43:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:30.525270 | orchestrator | 2026-01-01 03:43:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:30.528405 | orchestrator | 2026-01-01 03:43:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:30.528437 | orchestrator | 2026-01-01 03:43:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:33.584466 | orchestrator | 2026-01-01 03:43:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:33.586899 | orchestrator | 2026-01-01 03:43:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:33.586956 | orchestrator | 2026-01-01 03:43:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:36.634301 | orchestrator | 2026-01-01 
03:43:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:36.636715 | orchestrator | 2026-01-01 03:43:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:36.636745 | orchestrator | 2026-01-01 03:43:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:39.688867 | orchestrator | 2026-01-01 03:43:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:39.689791 | orchestrator | 2026-01-01 03:43:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:39.689815 | orchestrator | 2026-01-01 03:43:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:42.747368 | orchestrator | 2026-01-01 03:43:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:42.749107 | orchestrator | 2026-01-01 03:43:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:42.749136 | orchestrator | 2026-01-01 03:43:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:45.802316 | orchestrator | 2026-01-01 03:43:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:45.803748 | orchestrator | 2026-01-01 03:43:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:45.803987 | orchestrator | 2026-01-01 03:43:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:48.851483 | orchestrator | 2026-01-01 03:43:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:48.854116 | orchestrator | 2026-01-01 03:43:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:48.854145 | orchestrator | 2026-01-01 03:43:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:51.908097 | orchestrator | 2026-01-01 03:43:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:43:51.911118 | orchestrator | 2026-01-01 03:43:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:51.911230 | orchestrator | 2026-01-01 03:43:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:54.967295 | orchestrator | 2026-01-01 03:43:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:54.969119 | orchestrator | 2026-01-01 03:43:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:54.969149 | orchestrator | 2026-01-01 03:43:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:43:58.017152 | orchestrator | 2026-01-01 03:43:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:43:58.018892 | orchestrator | 2026-01-01 03:43:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:43:58.018929 | orchestrator | 2026-01-01 03:43:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:01.065875 | orchestrator | 2026-01-01 03:44:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:01.068213 | orchestrator | 2026-01-01 03:44:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:01.068255 | orchestrator | 2026-01-01 03:44:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:04.124370 | orchestrator | 2026-01-01 03:44:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:04.126158 | orchestrator | 2026-01-01 03:44:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:04.126199 | orchestrator | 2026-01-01 03:44:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:07.171067 | orchestrator | 2026-01-01 03:44:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:07.172107 | orchestrator | 2026-01-01 03:44:07 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:07.172125 | orchestrator | 2026-01-01 03:44:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:10.219169 | orchestrator | 2026-01-01 03:44:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:10.220331 | orchestrator | 2026-01-01 03:44:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:10.220397 | orchestrator | 2026-01-01 03:44:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:13.272318 | orchestrator | 2026-01-01 03:44:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:13.274575 | orchestrator | 2026-01-01 03:44:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:13.274637 | orchestrator | 2026-01-01 03:44:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:16.323983 | orchestrator | 2026-01-01 03:44:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:16.325329 | orchestrator | 2026-01-01 03:44:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:16.325357 | orchestrator | 2026-01-01 03:44:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:19.377560 | orchestrator | 2026-01-01 03:44:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:19.379871 | orchestrator | 2026-01-01 03:44:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:19.379924 | orchestrator | 2026-01-01 03:44:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:22.429586 | orchestrator | 2026-01-01 03:44:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:22.431231 | orchestrator | 2026-01-01 03:44:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:44:22.431266 | orchestrator | 2026-01-01 03:44:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:25.483233 | orchestrator | 2026-01-01 03:44:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:25.485372 | orchestrator | 2026-01-01 03:44:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:25.485480 | orchestrator | 2026-01-01 03:44:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:28.538213 | orchestrator | 2026-01-01 03:44:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:28.539238 | orchestrator | 2026-01-01 03:44:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:28.539283 | orchestrator | 2026-01-01 03:44:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:31.586105 | orchestrator | 2026-01-01 03:44:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:31.587686 | orchestrator | 2026-01-01 03:44:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:31.587722 | orchestrator | 2026-01-01 03:44:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:34.637978 | orchestrator | 2026-01-01 03:44:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:34.638738 | orchestrator | 2026-01-01 03:44:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:34.639317 | orchestrator | 2026-01-01 03:44:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:37.686987 | orchestrator | 2026-01-01 03:44:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:37.688165 | orchestrator | 2026-01-01 03:44:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:37.688283 | orchestrator | 2026-01-01 03:44:37 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:44:40.737692 | orchestrator | 2026-01-01 03:44:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:40.739294 | orchestrator | 2026-01-01 03:44:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:40.739321 | orchestrator | 2026-01-01 03:44:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:43.784777 | orchestrator | 2026-01-01 03:44:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:43.786823 | orchestrator | 2026-01-01 03:44:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:43.786868 | orchestrator | 2026-01-01 03:44:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:46.833343 | orchestrator | 2026-01-01 03:44:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:46.834983 | orchestrator | 2026-01-01 03:44:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:46.835140 | orchestrator | 2026-01-01 03:44:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:49.888543 | orchestrator | 2026-01-01 03:44:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:49.891840 | orchestrator | 2026-01-01 03:44:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:49.891948 | orchestrator | 2026-01-01 03:44:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:52.940705 | orchestrator | 2026-01-01 03:44:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:52.943143 | orchestrator | 2026-01-01 03:44:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:52.943167 | orchestrator | 2026-01-01 03:44:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:55.986218 | orchestrator | 2026-01-01 
03:44:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:55.988556 | orchestrator | 2026-01-01 03:44:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:55.988632 | orchestrator | 2026-01-01 03:44:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:44:59.038107 | orchestrator | 2026-01-01 03:44:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:44:59.040085 | orchestrator | 2026-01-01 03:44:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:44:59.040722 | orchestrator | 2026-01-01 03:44:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:02.089145 | orchestrator | 2026-01-01 03:45:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:02.091415 | orchestrator | 2026-01-01 03:45:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:02.091542 | orchestrator | 2026-01-01 03:45:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:05.141347 | orchestrator | 2026-01-01 03:45:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:05.141558 | orchestrator | 2026-01-01 03:45:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:05.141578 | orchestrator | 2026-01-01 03:45:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:08.180338 | orchestrator | 2026-01-01 03:45:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:08.181188 | orchestrator | 2026-01-01 03:45:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:08.181219 | orchestrator | 2026-01-01 03:45:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:11.226813 | orchestrator | 2026-01-01 03:45:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:45:11.228182 | orchestrator | 2026-01-01 03:45:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:11.228227 | orchestrator | 2026-01-01 03:45:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:14.271261 | orchestrator | 2026-01-01 03:45:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:14.271675 | orchestrator | 2026-01-01 03:45:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:14.271767 | orchestrator | 2026-01-01 03:45:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:17.324118 | orchestrator | 2026-01-01 03:45:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:17.326307 | orchestrator | 2026-01-01 03:45:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:17.326472 | orchestrator | 2026-01-01 03:45:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:20.376690 | orchestrator | 2026-01-01 03:45:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:20.377746 | orchestrator | 2026-01-01 03:45:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:20.377775 | orchestrator | 2026-01-01 03:45:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:23.420232 | orchestrator | 2026-01-01 03:45:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:23.420883 | orchestrator | 2026-01-01 03:45:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:23.420932 | orchestrator | 2026-01-01 03:45:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:26.476224 | orchestrator | 2026-01-01 03:45:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:26.477757 | orchestrator | 2026-01-01 03:45:26 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:26.477788 | orchestrator | 2026-01-01 03:45:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:29.534879 | orchestrator | 2026-01-01 03:45:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:29.534966 | orchestrator | 2026-01-01 03:45:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:29.534980 | orchestrator | 2026-01-01 03:45:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:32.590320 | orchestrator | 2026-01-01 03:45:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:32.592624 | orchestrator | 2026-01-01 03:45:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:32.594558 | orchestrator | 2026-01-01 03:45:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:35.642347 | orchestrator | 2026-01-01 03:45:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:35.642542 | orchestrator | 2026-01-01 03:45:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:35.642557 | orchestrator | 2026-01-01 03:45:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:38.693577 | orchestrator | 2026-01-01 03:45:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:38.695114 | orchestrator | 2026-01-01 03:45:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:38.695257 | orchestrator | 2026-01-01 03:45:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:41.748503 | orchestrator | 2026-01-01 03:45:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:41.751322 | orchestrator | 2026-01-01 03:45:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:45:41.752330 | orchestrator | 2026-01-01 03:45:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:44.797973 | orchestrator | 2026-01-01 03:45:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:44.798976 | orchestrator | 2026-01-01 03:45:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:44.799037 | orchestrator | 2026-01-01 03:45:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:47.852254 | orchestrator | 2026-01-01 03:45:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:47.854192 | orchestrator | 2026-01-01 03:45:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:47.854227 | orchestrator | 2026-01-01 03:45:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:50.909061 | orchestrator | 2026-01-01 03:45:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:50.910658 | orchestrator | 2026-01-01 03:45:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:50.910701 | orchestrator | 2026-01-01 03:45:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:53.953273 | orchestrator | 2026-01-01 03:45:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:53.954354 | orchestrator | 2026-01-01 03:45:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:53.954572 | orchestrator | 2026-01-01 03:45:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:45:57.001556 | orchestrator | 2026-01-01 03:45:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:45:57.003073 | orchestrator | 2026-01-01 03:45:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:45:57.003097 | orchestrator | 2026-01-01 03:45:57 | INFO  | Wait 1 second(s) 
until the next check
2026-01-01 03:46:00.051734 | orchestrator | 2026-01-01 03:46:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 03:46:00.053399 | orchestrator | 2026-01-01 03:46:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 03:46:00.053694 | orchestrator | 2026-01-01 03:46:00 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 03:46:03 through 03:51:11; tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 remained in state STARTED throughout ...]
2026-01-01 03:51:14.189263 | orchestrator | 2026-01-01 03:51:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 03:51:14.189770 | orchestrator | 2026-01-01 03:51:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 03:51:14.189804 | orchestrator | 2026-01-01 03:51:14 | INFO  | Wait 1 second(s)
until the next check 2026-01-01 03:51:17.241038 | orchestrator | 2026-01-01 03:51:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:17.242471 | orchestrator | 2026-01-01 03:51:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:17.242489 | orchestrator | 2026-01-01 03:51:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:20.279408 | orchestrator | 2026-01-01 03:51:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:20.279798 | orchestrator | 2026-01-01 03:51:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:20.280232 | orchestrator | 2026-01-01 03:51:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:23.330312 | orchestrator | 2026-01-01 03:51:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:23.332135 | orchestrator | 2026-01-01 03:51:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:23.332173 | orchestrator | 2026-01-01 03:51:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:26.376378 | orchestrator | 2026-01-01 03:51:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:26.378919 | orchestrator | 2026-01-01 03:51:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:26.378966 | orchestrator | 2026-01-01 03:51:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:29.421529 | orchestrator | 2026-01-01 03:51:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:29.422460 | orchestrator | 2026-01-01 03:51:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:29.422496 | orchestrator | 2026-01-01 03:51:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:32.478292 | orchestrator | 2026-01-01 
03:51:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:32.481211 | orchestrator | 2026-01-01 03:51:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:32.481312 | orchestrator | 2026-01-01 03:51:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:35.535588 | orchestrator | 2026-01-01 03:51:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:35.537630 | orchestrator | 2026-01-01 03:51:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:35.537652 | orchestrator | 2026-01-01 03:51:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:38.590819 | orchestrator | 2026-01-01 03:51:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:38.592438 | orchestrator | 2026-01-01 03:51:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:38.592471 | orchestrator | 2026-01-01 03:51:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:41.636267 | orchestrator | 2026-01-01 03:51:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:41.637143 | orchestrator | 2026-01-01 03:51:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:41.637184 | orchestrator | 2026-01-01 03:51:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:44.682878 | orchestrator | 2026-01-01 03:51:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:44.685048 | orchestrator | 2026-01-01 03:51:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:44.685101 | orchestrator | 2026-01-01 03:51:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:47.733872 | orchestrator | 2026-01-01 03:51:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:51:47.735593 | orchestrator | 2026-01-01 03:51:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:47.735625 | orchestrator | 2026-01-01 03:51:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:50.787467 | orchestrator | 2026-01-01 03:51:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:50.790477 | orchestrator | 2026-01-01 03:51:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:50.790565 | orchestrator | 2026-01-01 03:51:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:53.838510 | orchestrator | 2026-01-01 03:51:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:53.839649 | orchestrator | 2026-01-01 03:51:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:53.839686 | orchestrator | 2026-01-01 03:51:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:56.883134 | orchestrator | 2026-01-01 03:51:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:56.884830 | orchestrator | 2026-01-01 03:51:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:56.884869 | orchestrator | 2026-01-01 03:51:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:51:59.936293 | orchestrator | 2026-01-01 03:51:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:51:59.937530 | orchestrator | 2026-01-01 03:51:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:51:59.937574 | orchestrator | 2026-01-01 03:51:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:02.987413 | orchestrator | 2026-01-01 03:52:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:02.988474 | orchestrator | 2026-01-01 03:52:02 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:02.988515 | orchestrator | 2026-01-01 03:52:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:06.044667 | orchestrator | 2026-01-01 03:52:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:06.048665 | orchestrator | 2026-01-01 03:52:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:06.048872 | orchestrator | 2026-01-01 03:52:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:09.101513 | orchestrator | 2026-01-01 03:52:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:09.102653 | orchestrator | 2026-01-01 03:52:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:09.102682 | orchestrator | 2026-01-01 03:52:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:12.148332 | orchestrator | 2026-01-01 03:52:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:12.149375 | orchestrator | 2026-01-01 03:52:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:12.149417 | orchestrator | 2026-01-01 03:52:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:15.192144 | orchestrator | 2026-01-01 03:52:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:15.193640 | orchestrator | 2026-01-01 03:52:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:15.193675 | orchestrator | 2026-01-01 03:52:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:18.251282 | orchestrator | 2026-01-01 03:52:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:18.253974 | orchestrator | 2026-01-01 03:52:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:52:18.253997 | orchestrator | 2026-01-01 03:52:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:21.300318 | orchestrator | 2026-01-01 03:52:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:21.301532 | orchestrator | 2026-01-01 03:52:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:21.301554 | orchestrator | 2026-01-01 03:52:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:24.347142 | orchestrator | 2026-01-01 03:52:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:24.348555 | orchestrator | 2026-01-01 03:52:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:24.348586 | orchestrator | 2026-01-01 03:52:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:27.396008 | orchestrator | 2026-01-01 03:52:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:27.397644 | orchestrator | 2026-01-01 03:52:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:27.397686 | orchestrator | 2026-01-01 03:52:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:30.449948 | orchestrator | 2026-01-01 03:52:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:30.451550 | orchestrator | 2026-01-01 03:52:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:30.451603 | orchestrator | 2026-01-01 03:52:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:33.506176 | orchestrator | 2026-01-01 03:52:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:33.508928 | orchestrator | 2026-01-01 03:52:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:33.508967 | orchestrator | 2026-01-01 03:52:33 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:52:36.566850 | orchestrator | 2026-01-01 03:52:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:36.568496 | orchestrator | 2026-01-01 03:52:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:36.568534 | orchestrator | 2026-01-01 03:52:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:39.620836 | orchestrator | 2026-01-01 03:52:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:39.622631 | orchestrator | 2026-01-01 03:52:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:39.622941 | orchestrator | 2026-01-01 03:52:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:42.670996 | orchestrator | 2026-01-01 03:52:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:42.671886 | orchestrator | 2026-01-01 03:52:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:42.671913 | orchestrator | 2026-01-01 03:52:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:45.724433 | orchestrator | 2026-01-01 03:52:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:45.724644 | orchestrator | 2026-01-01 03:52:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:45.725003 | orchestrator | 2026-01-01 03:52:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:48.775321 | orchestrator | 2026-01-01 03:52:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:48.777143 | orchestrator | 2026-01-01 03:52:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:48.777180 | orchestrator | 2026-01-01 03:52:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:51.838907 | orchestrator | 2026-01-01 
03:52:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:51.841653 | orchestrator | 2026-01-01 03:52:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:51.841702 | orchestrator | 2026-01-01 03:52:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:54.896753 | orchestrator | 2026-01-01 03:52:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:54.898374 | orchestrator | 2026-01-01 03:52:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:54.898406 | orchestrator | 2026-01-01 03:52:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:52:57.948446 | orchestrator | 2026-01-01 03:52:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:52:57.949554 | orchestrator | 2026-01-01 03:52:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:52:57.949575 | orchestrator | 2026-01-01 03:52:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:00.994940 | orchestrator | 2026-01-01 03:53:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:00.996295 | orchestrator | 2026-01-01 03:53:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:00.996334 | orchestrator | 2026-01-01 03:53:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:04.043179 | orchestrator | 2026-01-01 03:53:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:04.045873 | orchestrator | 2026-01-01 03:53:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:04.045902 | orchestrator | 2026-01-01 03:53:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:07.090552 | orchestrator | 2026-01-01 03:53:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:53:07.091187 | orchestrator | 2026-01-01 03:53:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:07.091285 | orchestrator | 2026-01-01 03:53:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:10.133323 | orchestrator | 2026-01-01 03:53:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:10.135087 | orchestrator | 2026-01-01 03:53:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:10.135136 | orchestrator | 2026-01-01 03:53:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:13.177233 | orchestrator | 2026-01-01 03:53:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:13.178746 | orchestrator | 2026-01-01 03:53:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:13.178989 | orchestrator | 2026-01-01 03:53:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:16.229960 | orchestrator | 2026-01-01 03:53:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:16.232607 | orchestrator | 2026-01-01 03:53:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:16.232666 | orchestrator | 2026-01-01 03:53:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:19.283760 | orchestrator | 2026-01-01 03:53:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:19.284451 | orchestrator | 2026-01-01 03:53:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:19.284831 | orchestrator | 2026-01-01 03:53:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:22.331058 | orchestrator | 2026-01-01 03:53:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:22.333173 | orchestrator | 2026-01-01 03:53:22 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:22.333677 | orchestrator | 2026-01-01 03:53:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:25.371156 | orchestrator | 2026-01-01 03:53:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:25.373007 | orchestrator | 2026-01-01 03:53:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:25.373032 | orchestrator | 2026-01-01 03:53:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:28.419455 | orchestrator | 2026-01-01 03:53:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:28.421522 | orchestrator | 2026-01-01 03:53:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:28.421558 | orchestrator | 2026-01-01 03:53:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:31.470103 | orchestrator | 2026-01-01 03:53:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:31.470993 | orchestrator | 2026-01-01 03:53:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:31.471046 | orchestrator | 2026-01-01 03:53:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:34.515264 | orchestrator | 2026-01-01 03:53:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:34.517297 | orchestrator | 2026-01-01 03:53:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:34.517428 | orchestrator | 2026-01-01 03:53:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:37.567355 | orchestrator | 2026-01-01 03:53:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:37.570271 | orchestrator | 2026-01-01 03:53:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:53:37.570589 | orchestrator | 2026-01-01 03:53:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:40.612072 | orchestrator | 2026-01-01 03:53:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:40.613853 | orchestrator | 2026-01-01 03:53:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:40.613963 | orchestrator | 2026-01-01 03:53:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:43.656904 | orchestrator | 2026-01-01 03:53:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:43.658880 | orchestrator | 2026-01-01 03:53:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:43.658946 | orchestrator | 2026-01-01 03:53:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:46.698885 | orchestrator | 2026-01-01 03:53:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:46.701055 | orchestrator | 2026-01-01 03:53:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:46.701087 | orchestrator | 2026-01-01 03:53:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:49.740188 | orchestrator | 2026-01-01 03:53:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:49.741477 | orchestrator | 2026-01-01 03:53:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:49.741525 | orchestrator | 2026-01-01 03:53:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:52.788960 | orchestrator | 2026-01-01 03:53:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:52.791654 | orchestrator | 2026-01-01 03:53:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:52.791710 | orchestrator | 2026-01-01 03:53:52 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 03:53:55.851123 | orchestrator | 2026-01-01 03:53:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:55.853591 | orchestrator | 2026-01-01 03:53:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:55.853624 | orchestrator | 2026-01-01 03:53:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:53:58.900243 | orchestrator | 2026-01-01 03:53:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:53:58.902753 | orchestrator | 2026-01-01 03:53:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:53:58.902916 | orchestrator | 2026-01-01 03:53:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:01.953411 | orchestrator | 2026-01-01 03:54:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:01.955242 | orchestrator | 2026-01-01 03:54:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:01.955300 | orchestrator | 2026-01-01 03:54:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:05.003113 | orchestrator | 2026-01-01 03:54:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:05.006773 | orchestrator | 2026-01-01 03:54:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:05.006922 | orchestrator | 2026-01-01 03:54:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:08.062165 | orchestrator | 2026-01-01 03:54:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:08.064129 | orchestrator | 2026-01-01 03:54:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:08.064368 | orchestrator | 2026-01-01 03:54:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:11.111208 | orchestrator | 2026-01-01 
03:54:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:11.112997 | orchestrator | 2026-01-01 03:54:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:11.113164 | orchestrator | 2026-01-01 03:54:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:14.157041 | orchestrator | 2026-01-01 03:54:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:14.158457 | orchestrator | 2026-01-01 03:54:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:14.158600 | orchestrator | 2026-01-01 03:54:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:17.210579 | orchestrator | 2026-01-01 03:54:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:17.213420 | orchestrator | 2026-01-01 03:54:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:17.213448 | orchestrator | 2026-01-01 03:54:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:20.258363 | orchestrator | 2026-01-01 03:54:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:20.259501 | orchestrator | 2026-01-01 03:54:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:20.259531 | orchestrator | 2026-01-01 03:54:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:23.313187 | orchestrator | 2026-01-01 03:54:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:23.315832 | orchestrator | 2026-01-01 03:54:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:23.315895 | orchestrator | 2026-01-01 03:54:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:26.364384 | orchestrator | 2026-01-01 03:54:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 03:54:26.366632 | orchestrator | 2026-01-01 03:54:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:26.366753 | orchestrator | 2026-01-01 03:54:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:29.425513 | orchestrator | 2026-01-01 03:54:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:29.425705 | orchestrator | 2026-01-01 03:54:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:29.426787 | orchestrator | 2026-01-01 03:54:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:32.477664 | orchestrator | 2026-01-01 03:54:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:32.480824 | orchestrator | 2026-01-01 03:54:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:32.480903 | orchestrator | 2026-01-01 03:54:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:35.533432 | orchestrator | 2026-01-01 03:54:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:35.534717 | orchestrator | 2026-01-01 03:54:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:35.534779 | orchestrator | 2026-01-01 03:54:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:38.585707 | orchestrator | 2026-01-01 03:54:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:38.587798 | orchestrator | 2026-01-01 03:54:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:38.587832 | orchestrator | 2026-01-01 03:54:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:41.629606 | orchestrator | 2026-01-01 03:54:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:41.633124 | orchestrator | 2026-01-01 03:54:41 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:41.633161 | orchestrator | 2026-01-01 03:54:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:44.685208 | orchestrator | 2026-01-01 03:54:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:44.686275 | orchestrator | 2026-01-01 03:54:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:44.686348 | orchestrator | 2026-01-01 03:54:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:47.735654 | orchestrator | 2026-01-01 03:54:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:47.739236 | orchestrator | 2026-01-01 03:54:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:47.739262 | orchestrator | 2026-01-01 03:54:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:50.795546 | orchestrator | 2026-01-01 03:54:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:50.797081 | orchestrator | 2026-01-01 03:54:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:50.797108 | orchestrator | 2026-01-01 03:54:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:53.838070 | orchestrator | 2026-01-01 03:54:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:53.841964 | orchestrator | 2026-01-01 03:54:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:53.842057 | orchestrator | 2026-01-01 03:54:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:56.889041 | orchestrator | 2026-01-01 03:54:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:56.890455 | orchestrator | 2026-01-01 03:54:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
03:54:56.890485 | orchestrator | 2026-01-01 03:54:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 03:54:59.930852 | orchestrator | 2026-01-01 03:54:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 03:54:59.932926 | orchestrator | 2026-01-01 03:54:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 03:54:59.933034 | orchestrator | 2026-01-01 03:54:59 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:55:02 through 04:00:26; both tasks remain in state STARTED throughout ...]
2026-01-01 04:00:29.366242 | orchestrator | 2026-01-01 04:00:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:00:29.368013 | orchestrator | 2026-01-01 04:00:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:00:29.368050 | orchestrator | 2026-01-01 04:00:29 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:00:32.423232 | orchestrator | 2026-01-01 04:00:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:00:32.424137 | orchestrator | 2026-01-01 04:00:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:00:32.424173 | orchestrator | 2026-01-01 04:00:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:00:35.467169 | orchestrator | 2026-01-01 04:00:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:00:35.468257 | orchestrator | 2026-01-01 04:00:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:00:35.468428 | orchestrator | 2026-01-01 04:00:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:00:38.519518 | orchestrator | 2026-01-01 04:00:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:00:38.521100 | orchestrator | 2026-01-01 04:00:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:00:38.521133 | orchestrator | 2026-01-01 04:00:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:00:41.564440 | orchestrator | 2026-01-01 04:00:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:00:41.565583 | orchestrator | 2026-01-01 04:00:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:00:41.565617 | orchestrator | 2026-01-01 04:00:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:00:44.620795 | orchestrator | 2026-01-01 04:00:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:00:44.625018 | orchestrator | 2026-01-01 04:00:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:00:44.625055 | orchestrator | 2026-01-01 04:00:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:00:47.680295 | orchestrator | 2026-01-01 
04:00:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:00:47.681888 | orchestrator | 2026-01-01 04:00:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:00:47.681924 | orchestrator | 2026-01-01 04:00:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:00:50.735745 | orchestrator | 2026-01-01 04:00:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:00:50.737752 | orchestrator | 2026-01-01 04:00:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:00:50.737783 | orchestrator | 2026-01-01 04:00:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:00:53.774500 | orchestrator | 2026-01-01 04:00:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:00:53.775494 | orchestrator | 2026-01-01 04:00:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:00:53.775531 | orchestrator | 2026-01-01 04:00:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:00:56.813263 | orchestrator | 2026-01-01 04:00:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:00:56.813824 | orchestrator | 2026-01-01 04:00:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:00:56.813943 | orchestrator | 2026-01-01 04:00:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:00:59.862667 | orchestrator | 2026-01-01 04:00:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:00:59.864010 | orchestrator | 2026-01-01 04:00:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:00:59.864065 | orchestrator | 2026-01-01 04:00:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:02.908409 | orchestrator | 2026-01-01 04:01:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 04:01:02.910247 | orchestrator | 2026-01-01 04:01:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:02.910372 | orchestrator | 2026-01-01 04:01:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:05.958873 | orchestrator | 2026-01-01 04:01:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:05.962004 | orchestrator | 2026-01-01 04:01:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:05.962105 | orchestrator | 2026-01-01 04:01:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:09.014491 | orchestrator | 2026-01-01 04:01:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:09.015198 | orchestrator | 2026-01-01 04:01:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:09.015229 | orchestrator | 2026-01-01 04:01:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:12.067984 | orchestrator | 2026-01-01 04:01:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:12.070733 | orchestrator | 2026-01-01 04:01:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:12.070772 | orchestrator | 2026-01-01 04:01:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:15.122193 | orchestrator | 2026-01-01 04:01:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:15.125080 | orchestrator | 2026-01-01 04:01:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:15.125318 | orchestrator | 2026-01-01 04:01:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:18.178069 | orchestrator | 2026-01-01 04:01:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:18.180500 | orchestrator | 2026-01-01 04:01:18 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:18.180524 | orchestrator | 2026-01-01 04:01:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:21.232282 | orchestrator | 2026-01-01 04:01:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:21.234667 | orchestrator | 2026-01-01 04:01:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:21.234703 | orchestrator | 2026-01-01 04:01:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:24.283090 | orchestrator | 2026-01-01 04:01:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:24.285374 | orchestrator | 2026-01-01 04:01:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:24.285533 | orchestrator | 2026-01-01 04:01:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:27.335158 | orchestrator | 2026-01-01 04:01:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:27.337755 | orchestrator | 2026-01-01 04:01:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:27.337895 | orchestrator | 2026-01-01 04:01:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:30.388642 | orchestrator | 2026-01-01 04:01:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:30.391697 | orchestrator | 2026-01-01 04:01:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:30.392213 | orchestrator | 2026-01-01 04:01:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:33.435756 | orchestrator | 2026-01-01 04:01:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:33.437431 | orchestrator | 2026-01-01 04:01:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
04:01:33.437488 | orchestrator | 2026-01-01 04:01:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:36.479266 | orchestrator | 2026-01-01 04:01:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:36.481575 | orchestrator | 2026-01-01 04:01:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:36.481657 | orchestrator | 2026-01-01 04:01:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:39.523350 | orchestrator | 2026-01-01 04:01:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:39.525498 | orchestrator | 2026-01-01 04:01:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:39.525523 | orchestrator | 2026-01-01 04:01:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:42.569471 | orchestrator | 2026-01-01 04:01:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:42.570445 | orchestrator | 2026-01-01 04:01:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:42.570529 | orchestrator | 2026-01-01 04:01:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:45.620640 | orchestrator | 2026-01-01 04:01:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:45.622541 | orchestrator | 2026-01-01 04:01:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:45.622654 | orchestrator | 2026-01-01 04:01:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:48.675667 | orchestrator | 2026-01-01 04:01:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:48.678539 | orchestrator | 2026-01-01 04:01:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:48.678610 | orchestrator | 2026-01-01 04:01:48 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:01:51.728242 | orchestrator | 2026-01-01 04:01:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:51.730925 | orchestrator | 2026-01-01 04:01:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:51.731049 | orchestrator | 2026-01-01 04:01:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:54.779826 | orchestrator | 2026-01-01 04:01:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:54.780139 | orchestrator | 2026-01-01 04:01:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:54.780166 | orchestrator | 2026-01-01 04:01:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:01:57.832516 | orchestrator | 2026-01-01 04:01:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:01:57.835157 | orchestrator | 2026-01-01 04:01:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:01:57.835271 | orchestrator | 2026-01-01 04:01:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:00.886130 | orchestrator | 2026-01-01 04:02:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:00.888603 | orchestrator | 2026-01-01 04:02:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:00.888658 | orchestrator | 2026-01-01 04:02:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:03.940555 | orchestrator | 2026-01-01 04:02:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:03.943313 | orchestrator | 2026-01-01 04:02:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:03.943377 | orchestrator | 2026-01-01 04:02:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:06.993935 | orchestrator | 2026-01-01 
04:02:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:06.995689 | orchestrator | 2026-01-01 04:02:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:06.996199 | orchestrator | 2026-01-01 04:02:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:10.043245 | orchestrator | 2026-01-01 04:02:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:10.045046 | orchestrator | 2026-01-01 04:02:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:10.045076 | orchestrator | 2026-01-01 04:02:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:13.083496 | orchestrator | 2026-01-01 04:02:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:13.085040 | orchestrator | 2026-01-01 04:02:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:13.085083 | orchestrator | 2026-01-01 04:02:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:16.139215 | orchestrator | 2026-01-01 04:02:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:16.141656 | orchestrator | 2026-01-01 04:02:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:16.142296 | orchestrator | 2026-01-01 04:02:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:19.191550 | orchestrator | 2026-01-01 04:02:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:19.193493 | orchestrator | 2026-01-01 04:02:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:19.193568 | orchestrator | 2026-01-01 04:02:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:22.237359 | orchestrator | 2026-01-01 04:02:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 04:02:22.239285 | orchestrator | 2026-01-01 04:02:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:22.239390 | orchestrator | 2026-01-01 04:02:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:25.293836 | orchestrator | 2026-01-01 04:02:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:25.294887 | orchestrator | 2026-01-01 04:02:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:25.294918 | orchestrator | 2026-01-01 04:02:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:28.349904 | orchestrator | 2026-01-01 04:02:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:28.351574 | orchestrator | 2026-01-01 04:02:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:28.351766 | orchestrator | 2026-01-01 04:02:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:31.407422 | orchestrator | 2026-01-01 04:02:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:31.408401 | orchestrator | 2026-01-01 04:02:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:31.408429 | orchestrator | 2026-01-01 04:02:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:34.469055 | orchestrator | 2026-01-01 04:02:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:34.471025 | orchestrator | 2026-01-01 04:02:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:34.471095 | orchestrator | 2026-01-01 04:02:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:37.516894 | orchestrator | 2026-01-01 04:02:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:37.518169 | orchestrator | 2026-01-01 04:02:37 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:37.518194 | orchestrator | 2026-01-01 04:02:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:40.572233 | orchestrator | 2026-01-01 04:02:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:40.574202 | orchestrator | 2026-01-01 04:02:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:40.574232 | orchestrator | 2026-01-01 04:02:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:43.626464 | orchestrator | 2026-01-01 04:02:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:43.628076 | orchestrator | 2026-01-01 04:02:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:43.628108 | orchestrator | 2026-01-01 04:02:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:46.671977 | orchestrator | 2026-01-01 04:02:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:46.673454 | orchestrator | 2026-01-01 04:02:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:46.673499 | orchestrator | 2026-01-01 04:02:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:49.717415 | orchestrator | 2026-01-01 04:02:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:49.718879 | orchestrator | 2026-01-01 04:02:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:49.718912 | orchestrator | 2026-01-01 04:02:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:52.774632 | orchestrator | 2026-01-01 04:02:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:52.776907 | orchestrator | 2026-01-01 04:02:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
04:02:52.777019 | orchestrator | 2026-01-01 04:02:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:55.831099 | orchestrator | 2026-01-01 04:02:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:55.833734 | orchestrator | 2026-01-01 04:02:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:55.833838 | orchestrator | 2026-01-01 04:02:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:02:58.888024 | orchestrator | 2026-01-01 04:02:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:02:58.889374 | orchestrator | 2026-01-01 04:02:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:02:58.889429 | orchestrator | 2026-01-01 04:02:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:01.933806 | orchestrator | 2026-01-01 04:03:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:01.934506 | orchestrator | 2026-01-01 04:03:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:01.934543 | orchestrator | 2026-01-01 04:03:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:04.983436 | orchestrator | 2026-01-01 04:03:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:04.985149 | orchestrator | 2026-01-01 04:03:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:04.985180 | orchestrator | 2026-01-01 04:03:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:08.043128 | orchestrator | 2026-01-01 04:03:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:08.044499 | orchestrator | 2026-01-01 04:03:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:08.044725 | orchestrator | 2026-01-01 04:03:08 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:03:11.093154 | orchestrator | 2026-01-01 04:03:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:11.095459 | orchestrator | 2026-01-01 04:03:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:11.095491 | orchestrator | 2026-01-01 04:03:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:14.144082 | orchestrator | 2026-01-01 04:03:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:14.146005 | orchestrator | 2026-01-01 04:03:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:14.146122 | orchestrator | 2026-01-01 04:03:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:17.195863 | orchestrator | 2026-01-01 04:03:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:17.198092 | orchestrator | 2026-01-01 04:03:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:17.198157 | orchestrator | 2026-01-01 04:03:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:20.243147 | orchestrator | 2026-01-01 04:03:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:20.244482 | orchestrator | 2026-01-01 04:03:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:20.244534 | orchestrator | 2026-01-01 04:03:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:23.284418 | orchestrator | 2026-01-01 04:03:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:23.285105 | orchestrator | 2026-01-01 04:03:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:23.285279 | orchestrator | 2026-01-01 04:03:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:26.333026 | orchestrator | 2026-01-01 
04:03:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:26.335557 | orchestrator | 2026-01-01 04:03:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:26.335762 | orchestrator | 2026-01-01 04:03:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:29.387374 | orchestrator | 2026-01-01 04:03:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:29.390923 | orchestrator | 2026-01-01 04:03:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:29.390985 | orchestrator | 2026-01-01 04:03:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:32.442662 | orchestrator | 2026-01-01 04:03:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:32.442790 | orchestrator | 2026-01-01 04:03:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:32.442820 | orchestrator | 2026-01-01 04:03:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:35.498584 | orchestrator | 2026-01-01 04:03:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:35.500968 | orchestrator | 2026-01-01 04:03:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:35.501011 | orchestrator | 2026-01-01 04:03:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:38.543787 | orchestrator | 2026-01-01 04:03:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:38.544107 | orchestrator | 2026-01-01 04:03:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:38.544148 | orchestrator | 2026-01-01 04:03:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:41.586509 | orchestrator | 2026-01-01 04:03:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 04:03:41.590132 | orchestrator | 2026-01-01 04:03:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:41.590172 | orchestrator | 2026-01-01 04:03:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:44.645230 | orchestrator | 2026-01-01 04:03:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:44.646983 | orchestrator | 2026-01-01 04:03:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:44.647201 | orchestrator | 2026-01-01 04:03:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:47.703245 | orchestrator | 2026-01-01 04:03:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:47.705261 | orchestrator | 2026-01-01 04:03:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:47.705303 | orchestrator | 2026-01-01 04:03:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:50.759398 | orchestrator | 2026-01-01 04:03:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:50.760953 | orchestrator | 2026-01-01 04:03:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:50.761081 | orchestrator | 2026-01-01 04:03:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:53.813796 | orchestrator | 2026-01-01 04:03:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:53.815445 | orchestrator | 2026-01-01 04:03:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:53.815485 | orchestrator | 2026-01-01 04:03:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:56.867721 | orchestrator | 2026-01-01 04:03:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:56.870909 | orchestrator | 2026-01-01 04:03:56 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:56.870968 | orchestrator | 2026-01-01 04:03:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:03:59.924057 | orchestrator | 2026-01-01 04:03:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:03:59.924968 | orchestrator | 2026-01-01 04:03:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:03:59.925012 | orchestrator | 2026-01-01 04:03:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:02.972617 | orchestrator | 2026-01-01 04:04:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:04:02.974547 | orchestrator | 2026-01-01 04:04:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:04:02.974617 | orchestrator | 2026-01-01 04:04:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:06.025004 | orchestrator | 2026-01-01 04:04:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:04:06.025370 | orchestrator | 2026-01-01 04:04:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:04:06.025400 | orchestrator | 2026-01-01 04:04:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:09.072165 | orchestrator | 2026-01-01 04:04:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:04:09.074865 | orchestrator | 2026-01-01 04:04:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:04:09.074922 | orchestrator | 2026-01-01 04:04:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:04:12.115042 | orchestrator | 2026-01-01 04:04:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:04:12.115491 | orchestrator | 2026-01-01 04:04:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
04:04:12.115523 | orchestrator | 2026-01-01 04:04:12 | INFO  | Wait 1 second(s) until the next check
04:04:15.158218 | orchestrator | 2026-01-01 04:04:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
04:04:15.159377 | orchestrator | 2026-01-01 04:04:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
04:04:15.159653 | orchestrator | 2026-01-01 04:04:15 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated roughly every 3 seconds from 04:04:18 through 04:09:08; both tasks remained in state STARTED throughout ...]
04:09:11.054156 | orchestrator | 2026-01-01 04:09:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
04:09:11.056234 | orchestrator | 2026-01-01 04:09:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
04:09:11.056272 | orchestrator | 2026-01-01 04:09:11 | INFO  | Wait 1 second(s) until the next check
04:09:14.094692 | orchestrator | 2026-01-01 04:09:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
04:09:14.095457 | orchestrator | 2026-01-01 04:09:14 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:14.095493 | orchestrator | 2026-01-01 04:09:14 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:17.144864 | orchestrator | 2026-01-01 04:09:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:17.146844 | orchestrator | 2026-01-01 04:09:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:17.146872 | orchestrator | 2026-01-01 04:09:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:20.194229 | orchestrator | 2026-01-01 04:09:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:20.196741 | orchestrator | 2026-01-01 04:09:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:20.197190 | orchestrator | 2026-01-01 04:09:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:23.251146 | orchestrator | 2026-01-01 04:09:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:23.253779 | orchestrator | 2026-01-01 04:09:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:23.253855 | orchestrator | 2026-01-01 04:09:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:26.297868 | orchestrator | 2026-01-01 04:09:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:26.300492 | orchestrator | 2026-01-01 04:09:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:26.300528 | orchestrator | 2026-01-01 04:09:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:29.343734 | orchestrator | 2026-01-01 04:09:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:29.345980 | orchestrator | 2026-01-01 04:09:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
04:09:29.346011 | orchestrator | 2026-01-01 04:09:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:32.390816 | orchestrator | 2026-01-01 04:09:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:32.392047 | orchestrator | 2026-01-01 04:09:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:32.392079 | orchestrator | 2026-01-01 04:09:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:35.441250 | orchestrator | 2026-01-01 04:09:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:35.441871 | orchestrator | 2026-01-01 04:09:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:35.441903 | orchestrator | 2026-01-01 04:09:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:38.483498 | orchestrator | 2026-01-01 04:09:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:38.485518 | orchestrator | 2026-01-01 04:09:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:38.485555 | orchestrator | 2026-01-01 04:09:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:41.529863 | orchestrator | 2026-01-01 04:09:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:41.530897 | orchestrator | 2026-01-01 04:09:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:41.530931 | orchestrator | 2026-01-01 04:09:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:44.574758 | orchestrator | 2026-01-01 04:09:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:44.576472 | orchestrator | 2026-01-01 04:09:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:44.576513 | orchestrator | 2026-01-01 04:09:44 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:09:47.624199 | orchestrator | 2026-01-01 04:09:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:47.627671 | orchestrator | 2026-01-01 04:09:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:47.628039 | orchestrator | 2026-01-01 04:09:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:50.679752 | orchestrator | 2026-01-01 04:09:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:50.682338 | orchestrator | 2026-01-01 04:09:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:50.682376 | orchestrator | 2026-01-01 04:09:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:53.739948 | orchestrator | 2026-01-01 04:09:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:53.741050 | orchestrator | 2026-01-01 04:09:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:53.741086 | orchestrator | 2026-01-01 04:09:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:56.788581 | orchestrator | 2026-01-01 04:09:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:56.790904 | orchestrator | 2026-01-01 04:09:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:56.791078 | orchestrator | 2026-01-01 04:09:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:09:59.839755 | orchestrator | 2026-01-01 04:09:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:09:59.841922 | orchestrator | 2026-01-01 04:09:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:09:59.841955 | orchestrator | 2026-01-01 04:09:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:02.886596 | orchestrator | 2026-01-01 
04:10:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:02.888560 | orchestrator | 2026-01-01 04:10:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:02.888610 | orchestrator | 2026-01-01 04:10:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:05.945107 | orchestrator | 2026-01-01 04:10:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:05.946374 | orchestrator | 2026-01-01 04:10:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:05.946423 | orchestrator | 2026-01-01 04:10:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:08.988237 | orchestrator | 2026-01-01 04:10:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:08.990675 | orchestrator | 2026-01-01 04:10:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:08.990736 | orchestrator | 2026-01-01 04:10:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:12.035370 | orchestrator | 2026-01-01 04:10:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:12.036395 | orchestrator | 2026-01-01 04:10:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:12.036612 | orchestrator | 2026-01-01 04:10:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:15.079781 | orchestrator | 2026-01-01 04:10:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:15.081404 | orchestrator | 2026-01-01 04:10:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:15.081429 | orchestrator | 2026-01-01 04:10:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:18.122661 | orchestrator | 2026-01-01 04:10:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 04:10:18.125634 | orchestrator | 2026-01-01 04:10:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:18.125697 | orchestrator | 2026-01-01 04:10:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:21.171765 | orchestrator | 2026-01-01 04:10:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:21.172710 | orchestrator | 2026-01-01 04:10:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:21.172838 | orchestrator | 2026-01-01 04:10:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:24.210732 | orchestrator | 2026-01-01 04:10:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:24.212465 | orchestrator | 2026-01-01 04:10:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:24.212519 | orchestrator | 2026-01-01 04:10:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:27.256542 | orchestrator | 2026-01-01 04:10:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:27.259142 | orchestrator | 2026-01-01 04:10:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:27.259387 | orchestrator | 2026-01-01 04:10:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:30.301741 | orchestrator | 2026-01-01 04:10:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:30.304679 | orchestrator | 2026-01-01 04:10:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:30.305202 | orchestrator | 2026-01-01 04:10:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:33.360544 | orchestrator | 2026-01-01 04:10:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:33.362531 | orchestrator | 2026-01-01 04:10:33 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:33.362748 | orchestrator | 2026-01-01 04:10:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:36.404371 | orchestrator | 2026-01-01 04:10:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:36.408096 | orchestrator | 2026-01-01 04:10:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:36.408128 | orchestrator | 2026-01-01 04:10:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:39.457870 | orchestrator | 2026-01-01 04:10:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:39.458746 | orchestrator | 2026-01-01 04:10:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:39.459264 | orchestrator | 2026-01-01 04:10:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:42.507634 | orchestrator | 2026-01-01 04:10:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:42.509876 | orchestrator | 2026-01-01 04:10:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:42.510229 | orchestrator | 2026-01-01 04:10:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:45.552554 | orchestrator | 2026-01-01 04:10:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:45.553806 | orchestrator | 2026-01-01 04:10:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:45.553845 | orchestrator | 2026-01-01 04:10:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:48.598470 | orchestrator | 2026-01-01 04:10:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:48.600188 | orchestrator | 2026-01-01 04:10:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
04:10:48.600380 | orchestrator | 2026-01-01 04:10:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:51.645150 | orchestrator | 2026-01-01 04:10:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:51.646138 | orchestrator | 2026-01-01 04:10:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:51.646214 | orchestrator | 2026-01-01 04:10:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:54.694169 | orchestrator | 2026-01-01 04:10:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:54.698652 | orchestrator | 2026-01-01 04:10:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:54.699266 | orchestrator | 2026-01-01 04:10:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:10:57.747262 | orchestrator | 2026-01-01 04:10:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:10:57.749785 | orchestrator | 2026-01-01 04:10:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:10:57.749857 | orchestrator | 2026-01-01 04:10:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:00.793483 | orchestrator | 2026-01-01 04:11:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:00.795831 | orchestrator | 2026-01-01 04:11:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:00.795920 | orchestrator | 2026-01-01 04:11:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:03.839218 | orchestrator | 2026-01-01 04:11:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:03.841180 | orchestrator | 2026-01-01 04:11:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:03.841237 | orchestrator | 2026-01-01 04:11:03 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:11:06.887788 | orchestrator | 2026-01-01 04:11:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:06.889561 | orchestrator | 2026-01-01 04:11:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:06.889609 | orchestrator | 2026-01-01 04:11:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:09.938376 | orchestrator | 2026-01-01 04:11:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:09.941358 | orchestrator | 2026-01-01 04:11:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:09.941404 | orchestrator | 2026-01-01 04:11:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:12.984375 | orchestrator | 2026-01-01 04:11:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:12.984924 | orchestrator | 2026-01-01 04:11:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:12.984956 | orchestrator | 2026-01-01 04:11:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:16.042427 | orchestrator | 2026-01-01 04:11:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:16.043963 | orchestrator | 2026-01-01 04:11:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:16.044283 | orchestrator | 2026-01-01 04:11:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:19.092760 | orchestrator | 2026-01-01 04:11:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:19.095513 | orchestrator | 2026-01-01 04:11:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:19.095532 | orchestrator | 2026-01-01 04:11:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:22.125942 | orchestrator | 2026-01-01 
04:11:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:22.126648 | orchestrator | 2026-01-01 04:11:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:22.126682 | orchestrator | 2026-01-01 04:11:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:25.167197 | orchestrator | 2026-01-01 04:11:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:25.168915 | orchestrator | 2026-01-01 04:11:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:25.168935 | orchestrator | 2026-01-01 04:11:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:28.215629 | orchestrator | 2026-01-01 04:11:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:28.216404 | orchestrator | 2026-01-01 04:11:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:28.216443 | orchestrator | 2026-01-01 04:11:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:31.260358 | orchestrator | 2026-01-01 04:11:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:31.261336 | orchestrator | 2026-01-01 04:11:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:31.261367 | orchestrator | 2026-01-01 04:11:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:34.314650 | orchestrator | 2026-01-01 04:11:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:34.317231 | orchestrator | 2026-01-01 04:11:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:34.317273 | orchestrator | 2026-01-01 04:11:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:37.370387 | orchestrator | 2026-01-01 04:11:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 04:11:37.373663 | orchestrator | 2026-01-01 04:11:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:37.373688 | orchestrator | 2026-01-01 04:11:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:40.423541 | orchestrator | 2026-01-01 04:11:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:40.424413 | orchestrator | 2026-01-01 04:11:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:40.424652 | orchestrator | 2026-01-01 04:11:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:43.481387 | orchestrator | 2026-01-01 04:11:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:43.484782 | orchestrator | 2026-01-01 04:11:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:43.485135 | orchestrator | 2026-01-01 04:11:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:46.537047 | orchestrator | 2026-01-01 04:11:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:46.538283 | orchestrator | 2026-01-01 04:11:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:46.538391 | orchestrator | 2026-01-01 04:11:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:49.586361 | orchestrator | 2026-01-01 04:11:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:49.590702 | orchestrator | 2026-01-01 04:11:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:49.590732 | orchestrator | 2026-01-01 04:11:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:52.644415 | orchestrator | 2026-01-01 04:11:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:52.646182 | orchestrator | 2026-01-01 04:11:52 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:52.646452 | orchestrator | 2026-01-01 04:11:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:55.698955 | orchestrator | 2026-01-01 04:11:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:55.701270 | orchestrator | 2026-01-01 04:11:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:55.701446 | orchestrator | 2026-01-01 04:11:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:11:58.748373 | orchestrator | 2026-01-01 04:11:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:11:58.748899 | orchestrator | 2026-01-01 04:11:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:11:58.748932 | orchestrator | 2026-01-01 04:11:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:01.798528 | orchestrator | 2026-01-01 04:12:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:01.799564 | orchestrator | 2026-01-01 04:12:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:01.799595 | orchestrator | 2026-01-01 04:12:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:04.846369 | orchestrator | 2026-01-01 04:12:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:04.847787 | orchestrator | 2026-01-01 04:12:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:04.847821 | orchestrator | 2026-01-01 04:12:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:07.896469 | orchestrator | 2026-01-01 04:12:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:07.899843 | orchestrator | 2026-01-01 04:12:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
04:12:07.899888 | orchestrator | 2026-01-01 04:12:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:10.945165 | orchestrator | 2026-01-01 04:12:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:10.946645 | orchestrator | 2026-01-01 04:12:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:10.946678 | orchestrator | 2026-01-01 04:12:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:13.996315 | orchestrator | 2026-01-01 04:12:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:13.997979 | orchestrator | 2026-01-01 04:12:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:13.998006 | orchestrator | 2026-01-01 04:12:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:17.046610 | orchestrator | 2026-01-01 04:12:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:17.047778 | orchestrator | 2026-01-01 04:12:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:17.047870 | orchestrator | 2026-01-01 04:12:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:20.103622 | orchestrator | 2026-01-01 04:12:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:20.104506 | orchestrator | 2026-01-01 04:12:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:20.104550 | orchestrator | 2026-01-01 04:12:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:23.141007 | orchestrator | 2026-01-01 04:12:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:23.141491 | orchestrator | 2026-01-01 04:12:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:23.141536 | orchestrator | 2026-01-01 04:12:23 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:12:26.186640 | orchestrator | 2026-01-01 04:12:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:26.189900 | orchestrator | 2026-01-01 04:12:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:26.190013 | orchestrator | 2026-01-01 04:12:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:29.237486 | orchestrator | 2026-01-01 04:12:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:29.238599 | orchestrator | 2026-01-01 04:12:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:29.238922 | orchestrator | 2026-01-01 04:12:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:32.298334 | orchestrator | 2026-01-01 04:12:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:32.299921 | orchestrator | 2026-01-01 04:12:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:32.299983 | orchestrator | 2026-01-01 04:12:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:35.345122 | orchestrator | 2026-01-01 04:12:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:35.345231 | orchestrator | 2026-01-01 04:12:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:35.345248 | orchestrator | 2026-01-01 04:12:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:38.384349 | orchestrator | 2026-01-01 04:12:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:38.386307 | orchestrator | 2026-01-01 04:12:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:38.386421 | orchestrator | 2026-01-01 04:12:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:41.440623 | orchestrator | 2026-01-01 
04:12:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:41.442422 | orchestrator | 2026-01-01 04:12:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:41.442472 | orchestrator | 2026-01-01 04:12:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:44.495643 | orchestrator | 2026-01-01 04:12:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:44.496808 | orchestrator | 2026-01-01 04:12:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:44.496839 | orchestrator | 2026-01-01 04:12:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:47.542431 | orchestrator | 2026-01-01 04:12:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:47.542679 | orchestrator | 2026-01-01 04:12:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:47.542707 | orchestrator | 2026-01-01 04:12:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:50.581965 | orchestrator | 2026-01-01 04:12:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:50.582611 | orchestrator | 2026-01-01 04:12:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:50.582707 | orchestrator | 2026-01-01 04:12:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:53.629394 | orchestrator | 2026-01-01 04:12:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:12:53.630377 | orchestrator | 2026-01-01 04:12:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:12:53.630408 | orchestrator | 2026-01-01 04:12:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:12:56.677844 | orchestrator | 2026-01-01 04:12:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED
2026-01-01 04:12:56.678601 | orchestrator | 2026-01-01 04:12:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 04:12:56.678685 | orchestrator | 2026-01-01 04:12:56 | INFO  | Wait 1 second(s) until the next check
2026-01-01 04:12:59.734894 | orchestrator | 2026-01-01 04:12:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 04:12:59.738163 | orchestrator | 2026-01-01 04:12:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 04:12:59.738201 | orchestrator | 2026-01-01 04:12:59 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 reported "is in state STARTED" followed by "Wait 1 second(s) until the next check" every ~3 seconds from 04:13:02 through 04:18:26 ...]
2026-01-01 04:18:29.264793 | orchestrator | 2026-01-01 04:18:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 04:18:29.266817 | orchestrator | 2026-01-01 04:18:29 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:18:29.266910 | orchestrator | 2026-01-01 04:18:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:18:32.321157 | orchestrator | 2026-01-01 04:18:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:18:32.323087 | orchestrator | 2026-01-01 04:18:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:18:32.323142 | orchestrator | 2026-01-01 04:18:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:18:35.370471 | orchestrator | 2026-01-01 04:18:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:18:35.371395 | orchestrator | 2026-01-01 04:18:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:18:35.371438 | orchestrator | 2026-01-01 04:18:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:18:38.428006 | orchestrator | 2026-01-01 04:18:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:18:38.430444 | orchestrator | 2026-01-01 04:18:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:18:38.430471 | orchestrator | 2026-01-01 04:18:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:18:41.484762 | orchestrator | 2026-01-01 04:18:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:18:41.487596 | orchestrator | 2026-01-01 04:18:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:18:41.487675 | orchestrator | 2026-01-01 04:18:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:18:44.533131 | orchestrator | 2026-01-01 04:18:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:18:44.533244 | orchestrator | 2026-01-01 04:18:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
04:18:44.533312 | orchestrator | 2026-01-01 04:18:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:18:47.580932 | orchestrator | 2026-01-01 04:18:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:18:47.582646 | orchestrator | 2026-01-01 04:18:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:18:47.582690 | orchestrator | 2026-01-01 04:18:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:18:50.636930 | orchestrator | 2026-01-01 04:18:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:18:50.639325 | orchestrator | 2026-01-01 04:18:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:18:50.639358 | orchestrator | 2026-01-01 04:18:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:18:53.690735 | orchestrator | 2026-01-01 04:18:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:18:53.692232 | orchestrator | 2026-01-01 04:18:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:18:53.692754 | orchestrator | 2026-01-01 04:18:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:18:56.745245 | orchestrator | 2026-01-01 04:18:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:18:56.746703 | orchestrator | 2026-01-01 04:18:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:18:56.746731 | orchestrator | 2026-01-01 04:18:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:18:59.790654 | orchestrator | 2026-01-01 04:18:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:18:59.791086 | orchestrator | 2026-01-01 04:18:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:18:59.791601 | orchestrator | 2026-01-01 04:18:59 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:19:02.836987 | orchestrator | 2026-01-01 04:19:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:02.838697 | orchestrator | 2026-01-01 04:19:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:02.838811 | orchestrator | 2026-01-01 04:19:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:05.887755 | orchestrator | 2026-01-01 04:19:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:05.889579 | orchestrator | 2026-01-01 04:19:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:05.889631 | orchestrator | 2026-01-01 04:19:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:08.934725 | orchestrator | 2026-01-01 04:19:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:08.936657 | orchestrator | 2026-01-01 04:19:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:08.936808 | orchestrator | 2026-01-01 04:19:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:11.993724 | orchestrator | 2026-01-01 04:19:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:11.995579 | orchestrator | 2026-01-01 04:19:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:11.995612 | orchestrator | 2026-01-01 04:19:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:15.041911 | orchestrator | 2026-01-01 04:19:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:15.042564 | orchestrator | 2026-01-01 04:19:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:15.043007 | orchestrator | 2026-01-01 04:19:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:18.094305 | orchestrator | 2026-01-01 
04:19:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:18.095842 | orchestrator | 2026-01-01 04:19:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:18.095915 | orchestrator | 2026-01-01 04:19:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:21.143452 | orchestrator | 2026-01-01 04:19:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:21.143689 | orchestrator | 2026-01-01 04:19:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:21.143726 | orchestrator | 2026-01-01 04:19:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:24.191542 | orchestrator | 2026-01-01 04:19:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:24.193176 | orchestrator | 2026-01-01 04:19:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:24.193196 | orchestrator | 2026-01-01 04:19:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:27.237714 | orchestrator | 2026-01-01 04:19:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:27.238591 | orchestrator | 2026-01-01 04:19:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:27.238624 | orchestrator | 2026-01-01 04:19:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:30.287356 | orchestrator | 2026-01-01 04:19:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:30.290986 | orchestrator | 2026-01-01 04:19:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:30.291099 | orchestrator | 2026-01-01 04:19:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:33.336969 | orchestrator | 2026-01-01 04:19:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 04:19:33.338058 | orchestrator | 2026-01-01 04:19:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:33.338099 | orchestrator | 2026-01-01 04:19:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:36.381090 | orchestrator | 2026-01-01 04:19:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:36.384126 | orchestrator | 2026-01-01 04:19:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:36.384234 | orchestrator | 2026-01-01 04:19:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:39.436199 | orchestrator | 2026-01-01 04:19:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:39.437781 | orchestrator | 2026-01-01 04:19:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:39.437821 | orchestrator | 2026-01-01 04:19:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:42.484601 | orchestrator | 2026-01-01 04:19:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:42.487805 | orchestrator | 2026-01-01 04:19:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:42.487885 | orchestrator | 2026-01-01 04:19:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:45.531618 | orchestrator | 2026-01-01 04:19:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:45.532581 | orchestrator | 2026-01-01 04:19:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:45.532615 | orchestrator | 2026-01-01 04:19:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:48.580058 | orchestrator | 2026-01-01 04:19:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:48.581776 | orchestrator | 2026-01-01 04:19:48 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:48.581810 | orchestrator | 2026-01-01 04:19:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:51.631514 | orchestrator | 2026-01-01 04:19:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:51.634572 | orchestrator | 2026-01-01 04:19:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:51.634593 | orchestrator | 2026-01-01 04:19:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:54.680602 | orchestrator | 2026-01-01 04:19:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:54.682483 | orchestrator | 2026-01-01 04:19:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:54.682706 | orchestrator | 2026-01-01 04:19:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:19:57.724145 | orchestrator | 2026-01-01 04:19:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:19:57.726223 | orchestrator | 2026-01-01 04:19:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:19:57.726332 | orchestrator | 2026-01-01 04:19:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:00.776803 | orchestrator | 2026-01-01 04:20:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:00.779588 | orchestrator | 2026-01-01 04:20:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:00.779632 | orchestrator | 2026-01-01 04:20:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:03.826196 | orchestrator | 2026-01-01 04:20:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:03.828359 | orchestrator | 2026-01-01 04:20:03 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
04:20:03.828433 | orchestrator | 2026-01-01 04:20:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:06.879813 | orchestrator | 2026-01-01 04:20:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:06.881908 | orchestrator | 2026-01-01 04:20:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:06.881941 | orchestrator | 2026-01-01 04:20:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:09.939642 | orchestrator | 2026-01-01 04:20:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:09.942236 | orchestrator | 2026-01-01 04:20:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:09.942309 | orchestrator | 2026-01-01 04:20:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:12.991543 | orchestrator | 2026-01-01 04:20:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:12.993424 | orchestrator | 2026-01-01 04:20:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:12.993454 | orchestrator | 2026-01-01 04:20:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:16.034932 | orchestrator | 2026-01-01 04:20:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:16.035685 | orchestrator | 2026-01-01 04:20:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:16.035725 | orchestrator | 2026-01-01 04:20:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:19.083875 | orchestrator | 2026-01-01 04:20:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:19.084867 | orchestrator | 2026-01-01 04:20:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:19.084894 | orchestrator | 2026-01-01 04:20:19 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:20:22.122832 | orchestrator | 2026-01-01 04:20:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:22.124160 | orchestrator | 2026-01-01 04:20:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:22.124186 | orchestrator | 2026-01-01 04:20:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:25.172458 | orchestrator | 2026-01-01 04:20:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:25.176674 | orchestrator | 2026-01-01 04:20:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:25.176727 | orchestrator | 2026-01-01 04:20:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:28.227466 | orchestrator | 2026-01-01 04:20:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:28.228613 | orchestrator | 2026-01-01 04:20:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:28.228642 | orchestrator | 2026-01-01 04:20:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:31.264050 | orchestrator | 2026-01-01 04:20:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:31.265448 | orchestrator | 2026-01-01 04:20:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:31.265478 | orchestrator | 2026-01-01 04:20:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:34.313070 | orchestrator | 2026-01-01 04:20:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:34.314659 | orchestrator | 2026-01-01 04:20:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:34.314710 | orchestrator | 2026-01-01 04:20:34 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:37.356095 | orchestrator | 2026-01-01 
04:20:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:37.357480 | orchestrator | 2026-01-01 04:20:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:37.357512 | orchestrator | 2026-01-01 04:20:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:40.411895 | orchestrator | 2026-01-01 04:20:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:40.414400 | orchestrator | 2026-01-01 04:20:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:40.414433 | orchestrator | 2026-01-01 04:20:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:43.463535 | orchestrator | 2026-01-01 04:20:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:43.464783 | orchestrator | 2026-01-01 04:20:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:43.464818 | orchestrator | 2026-01-01 04:20:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:46.512931 | orchestrator | 2026-01-01 04:20:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:46.514880 | orchestrator | 2026-01-01 04:20:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:46.514906 | orchestrator | 2026-01-01 04:20:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:49.560035 | orchestrator | 2026-01-01 04:20:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:49.561592 | orchestrator | 2026-01-01 04:20:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:49.561685 | orchestrator | 2026-01-01 04:20:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:52.601791 | orchestrator | 2026-01-01 04:20:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 04:20:52.603766 | orchestrator | 2026-01-01 04:20:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:52.603826 | orchestrator | 2026-01-01 04:20:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:55.661539 | orchestrator | 2026-01-01 04:20:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:55.664036 | orchestrator | 2026-01-01 04:20:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:55.664203 | orchestrator | 2026-01-01 04:20:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:20:58.710561 | orchestrator | 2026-01-01 04:20:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:20:58.713772 | orchestrator | 2026-01-01 04:20:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:20:58.713807 | orchestrator | 2026-01-01 04:20:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:01.763037 | orchestrator | 2026-01-01 04:21:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:01.764424 | orchestrator | 2026-01-01 04:21:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:01.764466 | orchestrator | 2026-01-01 04:21:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:04.811320 | orchestrator | 2026-01-01 04:21:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:04.814198 | orchestrator | 2026-01-01 04:21:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:04.814240 | orchestrator | 2026-01-01 04:21:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:07.854165 | orchestrator | 2026-01-01 04:21:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:07.857673 | orchestrator | 2026-01-01 04:21:07 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:07.857848 | orchestrator | 2026-01-01 04:21:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:10.906782 | orchestrator | 2026-01-01 04:21:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:10.908584 | orchestrator | 2026-01-01 04:21:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:10.908725 | orchestrator | 2026-01-01 04:21:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:13.955248 | orchestrator | 2026-01-01 04:21:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:13.956853 | orchestrator | 2026-01-01 04:21:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:13.956934 | orchestrator | 2026-01-01 04:21:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:17.008998 | orchestrator | 2026-01-01 04:21:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:17.009827 | orchestrator | 2026-01-01 04:21:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:17.009858 | orchestrator | 2026-01-01 04:21:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:20.061151 | orchestrator | 2026-01-01 04:21:20 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:20.062947 | orchestrator | 2026-01-01 04:21:20 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:20.063000 | orchestrator | 2026-01-01 04:21:20 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:23.108633 | orchestrator | 2026-01-01 04:21:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:23.111314 | orchestrator | 2026-01-01 04:21:23 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
04:21:23.111351 | orchestrator | 2026-01-01 04:21:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:26.166437 | orchestrator | 2026-01-01 04:21:26 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:26.168374 | orchestrator | 2026-01-01 04:21:26 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:26.168438 | orchestrator | 2026-01-01 04:21:26 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:29.208841 | orchestrator | 2026-01-01 04:21:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:29.210946 | orchestrator | 2026-01-01 04:21:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:29.210989 | orchestrator | 2026-01-01 04:21:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:32.256630 | orchestrator | 2026-01-01 04:21:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:32.257570 | orchestrator | 2026-01-01 04:21:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:32.257649 | orchestrator | 2026-01-01 04:21:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:35.303962 | orchestrator | 2026-01-01 04:21:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:35.304645 | orchestrator | 2026-01-01 04:21:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:35.304696 | orchestrator | 2026-01-01 04:21:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:38.348221 | orchestrator | 2026-01-01 04:21:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:38.350302 | orchestrator | 2026-01-01 04:21:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:38.350378 | orchestrator | 2026-01-01 04:21:38 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:21:41.401724 | orchestrator | 2026-01-01 04:21:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:41.404934 | orchestrator | 2026-01-01 04:21:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:41.404965 | orchestrator | 2026-01-01 04:21:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:44.451397 | orchestrator | 2026-01-01 04:21:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:44.453893 | orchestrator | 2026-01-01 04:21:44 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:44.453952 | orchestrator | 2026-01-01 04:21:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:47.504464 | orchestrator | 2026-01-01 04:21:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:47.506010 | orchestrator | 2026-01-01 04:21:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:47.506132 | orchestrator | 2026-01-01 04:21:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:50.557244 | orchestrator | 2026-01-01 04:21:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:50.558811 | orchestrator | 2026-01-01 04:21:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:50.558837 | orchestrator | 2026-01-01 04:21:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:53.602730 | orchestrator | 2026-01-01 04:21:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:53.606340 | orchestrator | 2026-01-01 04:21:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:53.606377 | orchestrator | 2026-01-01 04:21:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:56.649390 | orchestrator | 2026-01-01 
04:21:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:56.651974 | orchestrator | 2026-01-01 04:21:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:56.652016 | orchestrator | 2026-01-01 04:21:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:21:59.696947 | orchestrator | 2026-01-01 04:21:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:21:59.697055 | orchestrator | 2026-01-01 04:21:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:21:59.697072 | orchestrator | 2026-01-01 04:21:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:02.742359 | orchestrator | 2026-01-01 04:22:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:22:02.743766 | orchestrator | 2026-01-01 04:22:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:22:02.743819 | orchestrator | 2026-01-01 04:22:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:05.788529 | orchestrator | 2026-01-01 04:22:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:22:05.789877 | orchestrator | 2026-01-01 04:22:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:22:05.789914 | orchestrator | 2026-01-01 04:22:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:08.834910 | orchestrator | 2026-01-01 04:22:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:22:08.835577 | orchestrator | 2026-01-01 04:22:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:22:08.835611 | orchestrator | 2026-01-01 04:22:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:22:11.881109 | orchestrator | 2026-01-01 04:22:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 04:22:11.884268 | orchestrator | 2026-01-01 04:22:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 04:22:11.884358 | orchestrator | 2026-01-01 04:22:11 | INFO  | Wait 1 second(s) until the next check
2026-01-01 04:22:14.930802 | orchestrator | 2026-01-01 04:22:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED
2026-01-01 04:22:14.933974 | orchestrator | 2026-01-01 04:22:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED
2026-01-01 04:22:14.934075 | orchestrator | 2026-01-01 04:22:14 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries for tasks a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 and 8e792a69-0260-4269-a3ca-ead7b2153645 (both in state STARTED), repeated every ~3 s from 04:22:17 through 04:27:26, omitted ...]
2026-01-01 04:27:26.089567 | orchestrator | 2026-01-01 04:27:26 | INFO  | Wait 1 second(s) until the next check
2026-01-01 04:27:29.137041 | orchestrator | 2026-01-01 04:27:29 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state
STARTED 2026-01-01 04:27:29.139571 | orchestrator | 2026-01-01 04:27:29 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:27:29.139627 | orchestrator | 2026-01-01 04:27:29 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:27:32.187757 | orchestrator | 2026-01-01 04:27:32 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:27:32.188789 | orchestrator | 2026-01-01 04:27:32 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:27:32.188845 | orchestrator | 2026-01-01 04:27:32 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:27:35.238258 | orchestrator | 2026-01-01 04:27:35 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:27:35.241814 | orchestrator | 2026-01-01 04:27:35 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:27:35.241941 | orchestrator | 2026-01-01 04:27:35 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:27:38.288539 | orchestrator | 2026-01-01 04:27:38 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:27:38.291819 | orchestrator | 2026-01-01 04:27:38 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:27:38.291861 | orchestrator | 2026-01-01 04:27:38 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:27:41.343591 | orchestrator | 2026-01-01 04:27:41 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:27:41.345090 | orchestrator | 2026-01-01 04:27:41 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:27:41.345122 | orchestrator | 2026-01-01 04:27:41 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:27:44.402097 | orchestrator | 2026-01-01 04:27:44 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:27:44.405487 | orchestrator | 2026-01-01 04:27:44 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:27:44.405524 | orchestrator | 2026-01-01 04:27:44 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:27:47.446642 | orchestrator | 2026-01-01 04:27:47 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:27:47.448900 | orchestrator | 2026-01-01 04:27:47 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:27:47.448931 | orchestrator | 2026-01-01 04:27:47 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:27:50.503398 | orchestrator | 2026-01-01 04:27:50 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:27:50.505420 | orchestrator | 2026-01-01 04:27:50 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:27:50.505456 | orchestrator | 2026-01-01 04:27:50 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:27:53.551030 | orchestrator | 2026-01-01 04:27:53 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:27:53.552497 | orchestrator | 2026-01-01 04:27:53 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:27:53.552552 | orchestrator | 2026-01-01 04:27:53 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:27:56.607762 | orchestrator | 2026-01-01 04:27:56 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:27:56.609866 | orchestrator | 2026-01-01 04:27:56 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:27:56.609923 | orchestrator | 2026-01-01 04:27:56 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:27:59.663510 | orchestrator | 2026-01-01 04:27:59 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:27:59.666900 | orchestrator | 2026-01-01 04:27:59 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
04:27:59.666932 | orchestrator | 2026-01-01 04:27:59 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:02.714396 | orchestrator | 2026-01-01 04:28:02 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:02.716554 | orchestrator | 2026-01-01 04:28:02 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:02.716685 | orchestrator | 2026-01-01 04:28:02 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:05.772222 | orchestrator | 2026-01-01 04:28:05 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:05.774173 | orchestrator | 2026-01-01 04:28:05 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:05.774224 | orchestrator | 2026-01-01 04:28:05 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:08.814810 | orchestrator | 2026-01-01 04:28:08 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:08.819041 | orchestrator | 2026-01-01 04:28:08 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:08.819279 | orchestrator | 2026-01-01 04:28:08 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:11.866636 | orchestrator | 2026-01-01 04:28:11 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:11.868597 | orchestrator | 2026-01-01 04:28:11 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:11.868654 | orchestrator | 2026-01-01 04:28:11 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:14.914328 | orchestrator | 2026-01-01 04:28:14 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:14.917183 | orchestrator | 2026-01-01 04:28:14 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:14.917573 | orchestrator | 2026-01-01 04:28:14 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:28:17.963960 | orchestrator | 2026-01-01 04:28:17 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:17.965211 | orchestrator | 2026-01-01 04:28:17 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:17.965230 | orchestrator | 2026-01-01 04:28:17 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:21.015887 | orchestrator | 2026-01-01 04:28:21 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:21.016929 | orchestrator | 2026-01-01 04:28:21 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:21.016966 | orchestrator | 2026-01-01 04:28:21 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:24.061852 | orchestrator | 2026-01-01 04:28:24 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:24.062817 | orchestrator | 2026-01-01 04:28:24 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:24.062877 | orchestrator | 2026-01-01 04:28:24 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:27.109720 | orchestrator | 2026-01-01 04:28:27 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:27.111243 | orchestrator | 2026-01-01 04:28:27 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:27.111442 | orchestrator | 2026-01-01 04:28:27 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:30.161532 | orchestrator | 2026-01-01 04:28:30 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:30.162852 | orchestrator | 2026-01-01 04:28:30 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:30.162930 | orchestrator | 2026-01-01 04:28:30 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:33.208444 | orchestrator | 2026-01-01 
04:28:33 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:33.209619 | orchestrator | 2026-01-01 04:28:33 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:33.209700 | orchestrator | 2026-01-01 04:28:33 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:36.259231 | orchestrator | 2026-01-01 04:28:36 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:36.260775 | orchestrator | 2026-01-01 04:28:36 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:36.260840 | orchestrator | 2026-01-01 04:28:36 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:39.307726 | orchestrator | 2026-01-01 04:28:39 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:39.309704 | orchestrator | 2026-01-01 04:28:39 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:39.309788 | orchestrator | 2026-01-01 04:28:39 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:42.355221 | orchestrator | 2026-01-01 04:28:42 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:42.358189 | orchestrator | 2026-01-01 04:28:42 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:42.358210 | orchestrator | 2026-01-01 04:28:42 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:45.420140 | orchestrator | 2026-01-01 04:28:45 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:45.422521 | orchestrator | 2026-01-01 04:28:45 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:45.422567 | orchestrator | 2026-01-01 04:28:45 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:48.471580 | orchestrator | 2026-01-01 04:28:48 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 04:28:48.472293 | orchestrator | 2026-01-01 04:28:48 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:48.472325 | orchestrator | 2026-01-01 04:28:48 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:51.524664 | orchestrator | 2026-01-01 04:28:51 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:51.527159 | orchestrator | 2026-01-01 04:28:51 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:51.527234 | orchestrator | 2026-01-01 04:28:51 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:54.574504 | orchestrator | 2026-01-01 04:28:54 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:54.575930 | orchestrator | 2026-01-01 04:28:54 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:54.576152 | orchestrator | 2026-01-01 04:28:54 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:28:57.628973 | orchestrator | 2026-01-01 04:28:57 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:28:57.633277 | orchestrator | 2026-01-01 04:28:57 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:28:57.633327 | orchestrator | 2026-01-01 04:28:57 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:00.680237 | orchestrator | 2026-01-01 04:29:00 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:00.681330 | orchestrator | 2026-01-01 04:29:00 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:00.681392 | orchestrator | 2026-01-01 04:29:00 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:03.727938 | orchestrator | 2026-01-01 04:29:03 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:03.728682 | orchestrator | 2026-01-01 04:29:03 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:03.728911 | orchestrator | 2026-01-01 04:29:03 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:06.773460 | orchestrator | 2026-01-01 04:29:06 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:06.774910 | orchestrator | 2026-01-01 04:29:06 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:06.774977 | orchestrator | 2026-01-01 04:29:06 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:09.812259 | orchestrator | 2026-01-01 04:29:09 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:09.815285 | orchestrator | 2026-01-01 04:29:09 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:09.815674 | orchestrator | 2026-01-01 04:29:09 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:12.861713 | orchestrator | 2026-01-01 04:29:12 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:12.864708 | orchestrator | 2026-01-01 04:29:12 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:12.864748 | orchestrator | 2026-01-01 04:29:12 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:15.910936 | orchestrator | 2026-01-01 04:29:15 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:15.913336 | orchestrator | 2026-01-01 04:29:15 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:15.913455 | orchestrator | 2026-01-01 04:29:15 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:18.960279 | orchestrator | 2026-01-01 04:29:18 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:18.964332 | orchestrator | 2026-01-01 04:29:18 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 
04:29:18.964494 | orchestrator | 2026-01-01 04:29:18 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:22.021013 | orchestrator | 2026-01-01 04:29:22 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:22.022484 | orchestrator | 2026-01-01 04:29:22 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:22.022520 | orchestrator | 2026-01-01 04:29:22 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:25.078210 | orchestrator | 2026-01-01 04:29:25 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:25.079074 | orchestrator | 2026-01-01 04:29:25 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:25.079118 | orchestrator | 2026-01-01 04:29:25 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:28.130265 | orchestrator | 2026-01-01 04:29:28 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:28.131777 | orchestrator | 2026-01-01 04:29:28 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:28.131809 | orchestrator | 2026-01-01 04:29:28 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:31.187114 | orchestrator | 2026-01-01 04:29:31 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:31.188921 | orchestrator | 2026-01-01 04:29:31 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:31.189270 | orchestrator | 2026-01-01 04:29:31 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:34.241032 | orchestrator | 2026-01-01 04:29:34 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:34.242324 | orchestrator | 2026-01-01 04:29:34 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:34.242509 | orchestrator | 2026-01-01 04:29:34 | INFO  | Wait 1 second(s) 
until the next check 2026-01-01 04:29:37.296874 | orchestrator | 2026-01-01 04:29:37 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:37.296947 | orchestrator | 2026-01-01 04:29:37 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:37.296953 | orchestrator | 2026-01-01 04:29:37 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:40.329028 | orchestrator | 2026-01-01 04:29:40 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:40.329631 | orchestrator | 2026-01-01 04:29:40 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:40.329669 | orchestrator | 2026-01-01 04:29:40 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:43.376473 | orchestrator | 2026-01-01 04:29:43 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:43.377943 | orchestrator | 2026-01-01 04:29:43 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:43.378068 | orchestrator | 2026-01-01 04:29:43 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:46.419681 | orchestrator | 2026-01-01 04:29:46 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:46.421701 | orchestrator | 2026-01-01 04:29:46 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:46.421734 | orchestrator | 2026-01-01 04:29:46 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:49.473331 | orchestrator | 2026-01-01 04:29:49 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:49.474713 | orchestrator | 2026-01-01 04:29:49 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:49.474779 | orchestrator | 2026-01-01 04:29:49 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:52.536504 | orchestrator | 2026-01-01 
04:29:52 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:52.538258 | orchestrator | 2026-01-01 04:29:52 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:52.538289 | orchestrator | 2026-01-01 04:29:52 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:55.585276 | orchestrator | 2026-01-01 04:29:55 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:55.586563 | orchestrator | 2026-01-01 04:29:55 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:55.586662 | orchestrator | 2026-01-01 04:29:55 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:29:58.633924 | orchestrator | 2026-01-01 04:29:58 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:29:58.635467 | orchestrator | 2026-01-01 04:29:58 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:29:58.635508 | orchestrator | 2026-01-01 04:29:58 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:30:01.684037 | orchestrator | 2026-01-01 04:30:01 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:30:01.685841 | orchestrator | 2026-01-01 04:30:01 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:30:01.685907 | orchestrator | 2026-01-01 04:30:01 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:30:04.737950 | orchestrator | 2026-01-01 04:30:04 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:30:04.739770 | orchestrator | 2026-01-01 04:30:04 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:30:04.739884 | orchestrator | 2026-01-01 04:30:04 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:30:07.790559 | orchestrator | 2026-01-01 04:30:07 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state 
STARTED 2026-01-01 04:30:07.791592 | orchestrator | 2026-01-01 04:30:07 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:30:07.791792 | orchestrator | 2026-01-01 04:30:07 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:30:10.841025 | orchestrator | 2026-01-01 04:30:10 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:30:10.842214 | orchestrator | 2026-01-01 04:30:10 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:30:10.842243 | orchestrator | 2026-01-01 04:30:10 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:30:13.887662 | orchestrator | 2026-01-01 04:30:13 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:30:13.890116 | orchestrator | 2026-01-01 04:30:13 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:30:13.890426 | orchestrator | 2026-01-01 04:30:13 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:30:16.933192 | orchestrator | 2026-01-01 04:30:16 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:30:16.934602 | orchestrator | 2026-01-01 04:30:16 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:30:16.934616 | orchestrator | 2026-01-01 04:30:16 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:30:19.981534 | orchestrator | 2026-01-01 04:30:19 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:30:19.983528 | orchestrator | 2026-01-01 04:30:19 | INFO  | Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:30:19.983549 | orchestrator | 2026-01-01 04:30:19 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:30:23.042656 | orchestrator | 2026-01-01 04:30:23 | INFO  | Task a4fc41ac-03c2-4b4b-a16f-cd96d2a6cd20 is in state STARTED 2026-01-01 04:30:23.045001 | orchestrator | 2026-01-01 04:30:23 | INFO  
| Task 8e792a69-0260-4269-a3ca-ead7b2153645 is in state STARTED 2026-01-01 04:30:23.045073 | orchestrator | 2026-01-01 04:30:23 | INFO  | Wait 1 second(s) until the next check 2026-01-01 04:30:23.447958 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-01-01 04:30:23.449681 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-01-01 04:30:24.287896 | 2026-01-01 04:30:24.288083 | PLAY [Post output play] 2026-01-01 04:30:24.308850 | 2026-01-01 04:30:24.309015 | LOOP [stage-output : Register sources] 2026-01-01 04:30:24.375982 | 2026-01-01 04:30:24.376268 | TASK [stage-output : Check sudo] 2026-01-01 04:30:25.286310 | orchestrator | sudo: a password is required 2026-01-01 04:30:25.417774 | orchestrator | ok: Runtime: 0:00:00.015654 2026-01-01 04:30:25.430293 | 2026-01-01 04:30:25.430502 | LOOP [stage-output : Set source and destination for files and folders] 2026-01-01 04:30:25.469834 | 2026-01-01 04:30:25.470061 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-01-01 04:30:25.570303 | orchestrator | ok 2026-01-01 04:30:25.577164 | 2026-01-01 04:30:25.577368 | LOOP [stage-output : Ensure target folders exist] 2026-01-01 04:30:26.035726 | orchestrator | ok: "docs" 2026-01-01 04:30:26.036056 | 2026-01-01 04:30:26.325782 | orchestrator | ok: "artifacts" 2026-01-01 04:30:26.587592 | orchestrator | ok: "logs" 2026-01-01 04:30:26.611759 | 2026-01-01 04:30:26.611965 | LOOP [stage-output : Copy files and folders to staging folder] 2026-01-01 04:30:26.653786 | 2026-01-01 04:30:26.654100 | TASK [stage-output : Make all log files readable] 2026-01-01 04:30:26.951535 | orchestrator | ok 2026-01-01 04:30:26.957958 | 2026-01-01 04:30:26.958081 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-01-01 04:30:26.992925 | orchestrator | skipping: Conditional result was False 2026-01-01 04:30:27.002621 | 2026-01-01 04:30:27.002860 | TASK [stage-output : Discover 
log files for compression] 2026-01-01 04:30:27.027192 | orchestrator | skipping: Conditional result was False 2026-01-01 04:30:27.043459 | 2026-01-01 04:30:27.043653 | LOOP [stage-output : Archive everything from logs] 2026-01-01 04:30:27.086571 | 2026-01-01 04:30:27.086750 | PLAY [Post cleanup play] 2026-01-01 04:30:27.095223 | 2026-01-01 04:30:27.095402 | TASK [Set cloud fact (Zuul deployment)] 2026-01-01 04:30:27.153155 | orchestrator | ok 2026-01-01 04:30:27.169910 | 2026-01-01 04:30:27.170090 | TASK [Set cloud fact (local deployment)] 2026-01-01 04:30:27.205091 | orchestrator | skipping: Conditional result was False 2026-01-01 04:30:27.221035 | 2026-01-01 04:30:27.221270 | TASK [Clean the cloud environment] 2026-01-01 04:30:29.006129 | orchestrator | 2026-01-01 04:30:29 - clean up servers 2026-01-01 04:30:29.902452 | orchestrator | 2026-01-01 04:30:29 - testbed-manager 2026-01-01 04:30:29.984550 | orchestrator | 2026-01-01 04:30:29 - testbed-node-5 2026-01-01 04:30:30.065583 | orchestrator | 2026-01-01 04:30:30 - testbed-node-3 2026-01-01 04:30:30.152467 | orchestrator | 2026-01-01 04:30:30 - testbed-node-2 2026-01-01 04:30:30.238455 | orchestrator | 2026-01-01 04:30:30 - testbed-node-4 2026-01-01 04:30:30.334542 | orchestrator | 2026-01-01 04:30:30 - testbed-node-0 2026-01-01 04:30:30.426063 | orchestrator | 2026-01-01 04:30:30 - testbed-node-1 2026-01-01 04:30:30.510107 | orchestrator | 2026-01-01 04:30:30 - clean up keypairs 2026-01-01 04:30:30.526901 | orchestrator | 2026-01-01 04:30:30 - testbed 2026-01-01 04:30:30.551664 | orchestrator | 2026-01-01 04:30:30 - wait for servers to be gone 2026-01-01 04:30:44.125155 | orchestrator | 2026-01-01 04:30:44 - clean up ports 2026-01-01 04:30:44.306633 | orchestrator | 2026-01-01 04:30:44 - 1e5844ab-2096-4950-a3ed-61b0042a41e3 2026-01-01 04:30:44.763965 | orchestrator | 2026-01-01 04:30:44 - 5a60e7ee-ac97-46b0-ab1c-d3f9ae52c40f 2026-01-01 04:30:45.072201 | orchestrator | 2026-01-01 04:30:45 - 
742bb070-bee8-4b57-ba3d-9f6cae3370a9 2026-01-01 04:30:45.405192 | orchestrator | 2026-01-01 04:30:45 - 92135b66-69ae-4c62-bde3-9606ae0df9ae 2026-01-01 04:30:45.644611 | orchestrator | 2026-01-01 04:30:45 - b4552696-995b-4e0d-9655-7efcc471d34f 2026-01-01 04:30:45.933193 | orchestrator | 2026-01-01 04:30:45 - be1d19ae-4289-4349-9aa2-1cb962c126d7 2026-01-01 04:30:46.139458 | orchestrator | 2026-01-01 04:30:46 - e465e469-574b-4def-9ffa-282683733d9b 2026-01-01 04:30:46.381898 | orchestrator | 2026-01-01 04:30:46 - clean up volumes 2026-01-01 04:30:46.672926 | orchestrator | 2026-01-01 04:30:46 - testbed-volume-1-node-base 2026-01-01 04:30:46.715761 | orchestrator | 2026-01-01 04:30:46 - testbed-volume-2-node-base 2026-01-01 04:30:46.761735 | orchestrator | 2026-01-01 04:30:46 - testbed-volume-5-node-base 2026-01-01 04:30:46.804247 | orchestrator | 2026-01-01 04:30:46 - testbed-volume-3-node-base 2026-01-01 04:30:46.845929 | orchestrator | 2026-01-01 04:30:46 - testbed-volume-4-node-base 2026-01-01 04:30:46.898994 | orchestrator | 2026-01-01 04:30:46 - testbed-volume-0-node-base 2026-01-01 04:30:46.950753 | orchestrator | 2026-01-01 04:30:46 - testbed-volume-manager-base 2026-01-01 04:30:46.995030 | orchestrator | 2026-01-01 04:30:46 - testbed-volume-2-node-5 2026-01-01 04:30:47.040502 | orchestrator | 2026-01-01 04:30:47 - testbed-volume-8-node-5 2026-01-01 04:30:47.087663 | orchestrator | 2026-01-01 04:30:47 - testbed-volume-4-node-4 2026-01-01 04:30:47.134191 | orchestrator | 2026-01-01 04:30:47 - testbed-volume-5-node-5 2026-01-01 04:30:47.185747 | orchestrator | 2026-01-01 04:30:47 - testbed-volume-6-node-3 2026-01-01 04:30:47.231879 | orchestrator | 2026-01-01 04:30:47 - testbed-volume-1-node-4 2026-01-01 04:30:47.275984 | orchestrator | 2026-01-01 04:30:47 - testbed-volume-3-node-3 2026-01-01 04:30:47.318721 | orchestrator | 2026-01-01 04:30:47 - testbed-volume-7-node-4 2026-01-01 04:30:47.366700 | orchestrator | 2026-01-01 04:30:47 - testbed-volume-0-node-3 
2026-01-01 04:30:47.422092 | orchestrator | 2026-01-01 04:30:47 - disconnect routers
2026-01-01 04:30:48.196426 | orchestrator | 2026-01-01 04:30:48 - testbed
2026-01-01 04:30:49.174183 | orchestrator | 2026-01-01 04:30:49 - clean up subnets
2026-01-01 04:30:49.233784 | orchestrator | 2026-01-01 04:30:49 - subnet-testbed-management
2026-01-01 04:30:49.413910 | orchestrator | 2026-01-01 04:30:49 - clean up networks
2026-01-01 04:30:49.603746 | orchestrator | 2026-01-01 04:30:49 - net-testbed-management
2026-01-01 04:30:50.620777 | orchestrator | 2026-01-01 04:30:50 - clean up security groups
2026-01-01 04:30:50.674952 | orchestrator | 2026-01-01 04:30:50 - testbed-management
2026-01-01 04:30:50.806754 | orchestrator | 2026-01-01 04:30:50 - testbed-node
2026-01-01 04:30:50.964977 | orchestrator | 2026-01-01 04:30:50 - clean up floating ips
2026-01-01 04:30:50.997885 | orchestrator | 2026-01-01 04:30:50 - 81.163.192.183
2026-01-01 04:30:51.410008 | orchestrator | 2026-01-01 04:30:51 - clean up routers
2026-01-01 04:30:51.517019 | orchestrator | 2026-01-01 04:30:51 - testbed
2026-01-01 04:30:52.794404 | orchestrator | ok: Runtime: 0:00:25.035439
2026-01-01 04:30:52.797226 |
2026-01-01 04:30:52.797385 | PLAY RECAP
2026-01-01 04:30:52.797475 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-01 04:30:52.797517 |
2026-01-01 04:30:52.954757 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-01 04:30:52.955837 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-01 04:30:53.736607 |
2026-01-01 04:30:53.736782 | PLAY [Cleanup play]
2026-01-01 04:30:53.753634 |
2026-01-01 04:30:53.753793 | TASK [Set cloud fact (Zuul deployment)]
2026-01-01 04:30:53.816426 | orchestrator | ok
2026-01-01 04:30:53.826271 |
2026-01-01 04:30:53.826472 | TASK [Set cloud fact (local deployment)]
2026-01-01 04:30:53.862099 | orchestrator | skipping: Conditional result was False
2026-01-01 04:30:53.882198 |
2026-01-01 04:30:53.882501 | TASK [Clean the cloud environment]
2026-01-01 04:30:55.058958 | orchestrator | 2026-01-01 04:30:55 - clean up servers
2026-01-01 04:30:55.664062 | orchestrator | 2026-01-01 04:30:55 - clean up keypairs
2026-01-01 04:30:55.684141 | orchestrator | 2026-01-01 04:30:55 - wait for servers to be gone
2026-01-01 04:30:55.736230 | orchestrator | 2026-01-01 04:30:55 - clean up ports
2026-01-01 04:30:55.811252 | orchestrator | 2026-01-01 04:30:55 - clean up volumes
2026-01-01 04:30:55.873478 | orchestrator | 2026-01-01 04:30:55 - disconnect routers
2026-01-01 04:30:55.895638 | orchestrator | 2026-01-01 04:30:55 - clean up subnets
2026-01-01 04:30:55.914228 | orchestrator | 2026-01-01 04:30:55 - clean up networks
2026-01-01 04:30:56.043476 | orchestrator | 2026-01-01 04:30:56 - clean up security groups
2026-01-01 04:30:56.086107 | orchestrator | 2026-01-01 04:30:56 - clean up floating ips
2026-01-01 04:30:56.111651 | orchestrator | 2026-01-01 04:30:56 - clean up routers
2026-01-01 04:30:56.435707 | orchestrator | ok: Runtime: 0:00:01.466971
2026-01-01 04:30:56.439535 |
2026-01-01 04:30:56.439709 | PLAY RECAP
2026-01-01 04:30:56.439843 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-01 04:30:56.439912 |
2026-01-01 04:30:56.576831 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-01 04:30:56.577891 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-01 04:30:57.344456 |
2026-01-01 04:30:57.344627 | PLAY [Base post-fetch]
2026-01-01 04:30:57.361268 |
2026-01-01 04:30:57.361441 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-01 04:30:57.417673 | orchestrator | skipping: Conditional result was False
2026-01-01 04:30:57.432944 |
2026-01-01 04:30:57.433202 | TASK [fetch-output : Set log path for single node]
2026-01-01 04:30:57.482491 | orchestrator | ok
2026-01-01 04:30:57.492530 |
2026-01-01 04:30:57.492761 | LOOP [fetch-output : Ensure local output dirs]
2026-01-01 04:30:58.053568 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/1c6aefb8f75d46b4aa7685e460a319d2/work/logs"
2026-01-01 04:30:58.355901 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1c6aefb8f75d46b4aa7685e460a319d2/work/artifacts"
2026-01-01 04:30:58.669408 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1c6aefb8f75d46b4aa7685e460a319d2/work/docs"
2026-01-01 04:30:58.694525 |
2026-01-01 04:30:58.694760 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-01 04:30:59.730944 | orchestrator | changed: .d..t...... ./
2026-01-01 04:30:59.731362 | orchestrator | changed: All items complete
2026-01-01 04:30:59.731447 |
2026-01-01 04:31:00.531755 | orchestrator | changed: .d..t...... ./
2026-01-01 04:31:01.355491 | orchestrator | changed: .d..t...... ./
2026-01-01 04:31:01.386136 |
2026-01-01 04:31:01.386297 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-01 04:31:01.426585 | orchestrator | skipping: Conditional result was False
2026-01-01 04:31:01.430698 | orchestrator | skipping: Conditional result was False
2026-01-01 04:31:01.441173 |
2026-01-01 04:31:01.441366 | PLAY RECAP
2026-01-01 04:31:01.441436 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-01 04:31:01.441463 |
2026-01-01 04:31:01.585608 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-01 04:31:01.589166 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-01 04:31:02.386828 |
2026-01-01 04:31:02.387064 | PLAY [Base post]
2026-01-01 04:31:02.402198 |
2026-01-01 04:31:02.402371 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-01 04:31:03.542103 | orchestrator | changed
2026-01-01 04:31:03.552072 |
2026-01-01
04:31:03.552211 | PLAY RECAP 2026-01-01 04:31:03.552277 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-01-01 04:31:03.552370 | 2026-01-01 04:31:03.689229 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-01-01 04:31:03.693127 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-01-01 04:31:04.539294 | 2026-01-01 04:31:04.539494 | PLAY [Base post-logs] 2026-01-01 04:31:04.552062 | 2026-01-01 04:31:04.552234 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-01-01 04:31:05.058811 | localhost | changed 2026-01-01 04:31:05.075174 | 2026-01-01 04:31:05.075381 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-01-01 04:31:05.113224 | localhost | ok 2026-01-01 04:31:05.118567 | 2026-01-01 04:31:05.118724 | TASK [Set zuul-log-path fact] 2026-01-01 04:31:05.135578 | localhost | ok 2026-01-01 04:31:05.151758 | 2026-01-01 04:31:05.151982 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-01-01 04:31:05.181650 | localhost | ok 2026-01-01 04:31:05.188304 | 2026-01-01 04:31:05.188502 | TASK [upload-logs : Create log directories] 2026-01-01 04:31:05.709177 | localhost | changed 2026-01-01 04:31:05.713729 | 2026-01-01 04:31:05.713891 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-01-01 04:31:06.215895 | localhost -> localhost | ok: Runtime: 0:00:00.007126 2026-01-01 04:31:06.220243 | 2026-01-01 04:31:06.220388 | TASK [upload-logs : Upload logs to log server] 2026-01-01 04:31:06.828899 | localhost | Output suppressed because no_log was given 2026-01-01 04:31:06.833166 | 2026-01-01 04:31:06.833432 | LOOP [upload-logs : Compress console log and json output] 2026-01-01 04:31:06.901168 | localhost | skipping: Conditional result was False 2026-01-01 04:31:06.906090 | localhost | skipping: Conditional result was False 2026-01-01 04:31:06.916537 | 2026-01-01 
04:31:06.916714 | LOOP [upload-logs : Upload compressed console log and json output] 2026-01-01 04:31:06.967554 | localhost | skipping: Conditional result was False 2026-01-01 04:31:06.968128 | 2026-01-01 04:31:06.971421 | localhost | skipping: Conditional result was False 2026-01-01 04:31:06.978478 | 2026-01-01 04:31:06.978660 | LOOP [upload-logs : Upload console log and json output]
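Note on the cleanup output above: both "Clean the cloud environment" runs tear resources down in the same fixed order (servers, keypairs, ports, volumes, disconnect routers, subnets, networks, security groups, floating IPs, routers), because OpenStack refuses to delete a resource while something still depends on it: a router cannot go until its subnet interfaces are detached, a network cannot go while ports exist on it. The sketch below is only an illustration of that dependency ordering, not the testbed's actual cleanup code; `conn` stands for a hypothetical openstacksdk-style connection object, and only the first two phases are spelled out.

```python
# Teardown order as seen in the cleanup log. Leaf resources that attach to
# networks (servers, ports, volumes) are removed first; routers go last,
# after their subnet interfaces are detached and floating IPs are released.
CLEANUP_PHASES = [
    "clean up servers",
    "clean up keypairs",
    "wait for servers to be gone",
    "clean up ports",
    "clean up volumes",
    "disconnect routers",
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",
]


def cleanup(conn, log=print):
    """Illustrative sketch only: walk the phases in dependency order.
    `conn` is a hypothetical openstacksdk-style connection; the remaining
    phases would follow the same list-then-delete pattern."""
    for phase in CLEANUP_PHASES:
        log(phase)
        if phase == "clean up servers":
            for server in conn.compute.servers():
                conn.compute.delete_server(server)
        elif phase == "clean up ports":
            for port in conn.network.ports():
                conn.network.delete_port(port)
        # ... remaining phases omitted in this sketch ...
```

The second cleanup run in the log finishes in about a second precisely because the first run already emptied every phase: listing zero resources and deleting nothing is cheap, which makes the teardown safe to repeat as a belt-and-braces post playbook.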